Sample records for comprehensive experimental validation

  1. An experimentally validated network of nine haematopoietic transcription factors reveals mechanisms of cell state stability

    PubMed Central

    Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold

    2016-01-01

    Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438

  2. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
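
    The abstract does not give the exact form of the two metrics, so the following is only a minimal sketch of how a cumulative and a median relative-uncertainty metric might be computed from paired model/experiment cross sections. The array names and the specific uncertainty definition (relative deviation normalized by the experimental value) are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def uncertainty_metrics(sigma_model, sigma_exp):
    """Sketch of cumulative and median uncertainty metrics for
    model-vs-experiment cross-section comparisons (assumed definitions)."""
    sigma_model = np.asarray(sigma_model, dtype=float)
    sigma_exp = np.asarray(sigma_exp, dtype=float)
    # Relative deviation of each model prediction from its measurement.
    rel_dev = np.abs(sigma_model - sigma_exp) / sigma_exp
    cumulative = rel_dev.sum() / len(rel_dev)   # overall accuracy measure
    median = np.median(rel_dev)                 # robust, subset-friendly measure
    return cumulative, median

# Example with made-up cross sections (mb):
model = [120.0, 95.0, 60.0, 33.0]
exper = [115.0, 100.0, 70.0, 30.0]
print(uncertainty_metrics(model, exper))
```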

  3. Validity of Highlighting on Text Comprehension

    NASA Astrophysics Data System (ADS)

    So, Joey C. Y.; Chan, Alan H. S.

    2009-10-01

In this study, 38 university students performed a Chinese reading task on a light-emitting diode (LED) display under different task conditions to determine the effects of highlighting, and of its validity, on comprehension performance. Four levels of validity (0%, 33%, 67% and 100%) and a control condition with no highlighting were tested. Each subject performed all five experimental conditions, reading and comprehending a different passage in each. The results showed that the condition with 100% highlighting validity yielded better comprehension performance than the other validity levels and the condition with no highlighting. The comprehension score for the no-highlighting condition was lower than for the highlighting conditions containing distracters, though the difference was not significant.

  4. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    Davis, David O.

    2015-01-01

Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions and inconsistent results, (2) undocumented 3D effects (centerline-only measurements), and (3) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.

  5. Conditions for the Validity of Faraday's Law of Induction and Their Experimental Confirmation

    ERIC Educational Resources Information Center

    Lopez-Ramos, A.; Menendez, J. R.; Pique, C.

    2008-01-01

    This paper, as its main didactic objective, shows the conditions needed for the validity of Faraday's law of induction. Inadequate comprehension of these conditions has given rise to several paradoxes about the issue; some are analysed and solved in this paper in the light of the theoretical deduction of the induction law. Furthermore, an…

  6. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments.

    PubMed

    Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua

    2018-01-04

Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs) and specific in their focus (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset of experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing databases of experimentally validated lncRNAs. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    PubMed Central

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs) and specific in their focus (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset of experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing databases of experimentally validated lncRNAs. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  8. Comprehensive silicon solar-cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

A comprehensive silicon solar cell computer modeling scheme was developed to perform the following tasks: (1) model and analyze the net charge distribution in quasineutral regions; (2) characterize the experimentally determined temperature behavior of Spire Corp. n+pp+ solar cells in which the n+ emitter is formed by ion implantation of 75As or 31P; and (3) obtain initial validation results for the computer simulation program using Spire Corp. n+pp+ cells.

  9. A comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements.

    PubMed

    Abdelgaied, A; Fisher, J; Jennings, L M

    2018-02-01

A more robust pre-clinical wear simulation framework is required in order to simulate the wider and higher ranges of activities observed in different patient populations, such as younger, more active patients. Such a framework will help to understand and address the reported higher failure rates for younger and more active patients (National_Joint_Registry, 2016). The current study developed and validated a comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements (TKR). The input mechanical (elastic modulus and Poisson's ratio) and wear parameters of the moderately cross-linked ultra-high molecular weight polyethylene (UHMWPE) bearing material were independently measured from experimental studies under realistic test conditions, similar to the loading conditions found in total knee replacements. The wear predictions from the computational wear simulation were validated against direct experimental wear measurements for size 3 Sigma curved total knee replacements (DePuy, UK) in an independent experimental wear simulation study under three different daily activities: walking, deep squat, and stair ascent kinematic conditions. The measured compressive mechanical properties of the moderately cross-linked UHMWPE material were more than 20% lower than those reported in the literature under tensile test conditions. The pin-on-plate wear coefficient of moderately cross-linked UHMWPE was significantly dependent on the contact stress and the degree of cross-shear at the articulating surfaces. The computational wear predictions for the TKR from the current framework were consistent and in good agreement with the independent full-TKR experimental wear simulation measurements, with a coefficient of determination of 0.94 for the framework. In addition, the comprehensive combined experimental and computational framework was able to explain the complex experimental wear trends from the three different daily activities investigated. Therefore, such a framework can be adopted as a pre-clinical simulation approach to optimise designs and materials, as well as patient-specific total knee replacements, for a range of activities. Copyright © 2017. Published by Elsevier Ltd.
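
    The framework's agreement with the experimental wear measurements is summarized by a coefficient of determination of 0.94; a minimal sketch of that calculation for paired predicted and measured wear rates is shown below. The variable names and sample values are illustrative only, not data from the study.

```python
import numpy as np

def coefficient_of_determination(measured, predicted):
    """R^2 between experimental wear measurements and model predictions."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Illustrative wear rates (mm^3 per million cycles) for three activities.
measured = [5.2, 1.1, 3.4]
predicted = [5.0, 1.3, 3.1]
print(coefficient_of_determination(measured, predicted))
```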

  10. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

    This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.

  11. Comprehensive preclinical evaluation of a multi-physics model of liver tumor radiofrequency ablation.

    PubMed

    Audigier, Chloé; Mansi, Tommaso; Delingette, Hervé; Rapaka, Saikiran; Passerini, Tiziano; Mihalef, Viorel; Jolly, Marie-Pierre; Pop, Raoul; Diana, Michele; Soler, Luc; Kamen, Ali; Comaniciu, Dorin; Ayache, Nicholas

    2017-09-01

    We aim at developing a framework for the validation of a subject-specific multi-physics model of liver tumor radiofrequency ablation (RFA). The RFA computation becomes subject specific after several levels of personalization: geometrical and biophysical (hemodynamics, heat transfer and an extended cellular necrosis model). We present a comprehensive experimental setup combining multimodal, pre- and postoperative anatomical and functional images, as well as the interventional monitoring of intra-operative signals: the temperature and delivered power. To exploit this dataset, an efficient processing pipeline is introduced, which copes with image noise, variable resolution and anisotropy. The validation study includes twelve ablations from five healthy pig livers: a mean point-to-mesh error between predicted and actual ablation extent of 5.3 ± 3.6 mm is achieved. This enables an end-to-end preclinical validation framework that considers the available dataset.
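
    The headline validation metric here is a point-to-mesh error between predicted and actual ablation extent. A simplified sketch is shown below: it approximates the ground-truth mesh by its vertex cloud and uses nearest-neighbour distances, whereas a full implementation would use true point-to-triangle distances. Names and data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_mesh_error(predicted_points, mesh_vertices):
    """Mean distance from predicted ablation surface points to the
    segmented (ground-truth) mesh, approximated by its vertex cloud."""
    tree = cKDTree(mesh_vertices)
    distances, _ = tree.query(predicted_points)  # nearest vertex per point
    return distances.mean(), distances.std()

# Illustrative 3D surface points (mm).
rng = np.random.default_rng(0)
ground_truth = rng.normal(size=(500, 3)) * 10.0
prediction = ground_truth + rng.normal(scale=2.0, size=(500, 3))
print(point_to_mesh_error(prediction, ground_truth))
```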

  12. Detailed Measurements of the Aeroelastic Response of a Rigid Coaxial Rotor in Hover

    DTIC Science & Technology

    2017-08-11

included: hover testing of single and CCR rotors (Year 1), deformation measurement and modal identification of rotor blades in the non-rotating and...the rotor blades, as well as the detailed experimental data, were shared with Dr. Rajneesh Singh and Dr. Hao Kang at the Vehicle Technology Directorate...VTD), Aberdeen Proving Grounds, MD. In this way, the experimental data could be used to validate US Army comprehensive analysis tools, specifically

  13. Experimental annotation of the human genome using microarray technology.

    PubMed

    Shoemaker, D D; Schadt, E E; Armour, C D; He, Y D; Garrett-Engele, P; McDonagh, P D; Loerch, P M; Leonardson, A; Lum, P Y; Cavet, G; Wu, L F; Altschuler, S J; Edwards, S; King, J; Tsang, J S; Schimmack, G; Schelter, J M; Koch, J; Ziman, M; Marton, M J; Li, B; Cundiff, P; Ward, T; Castle, J; Krolewski, M; Meyer, M R; Mao, M; Burchard, J; Kidd, M J; Dai, H; Phillips, J W; Linsley, P S; Stoughton, R; Scherer, S; Boguski, M S

    2001-02-15

    The most important product of the sequencing of a genome is a complete, accurate catalogue of genes and their products, primarily messenger RNA transcripts and their cognate proteins. Such a catalogue cannot be constructed by computational annotation alone; it requires experimental validation on a genome scale. Using 'exon' and 'tiling' arrays fabricated by ink-jet oligonucleotide synthesis, we devised an experimental approach to validate and refine computational gene predictions and define full-length transcripts on the basis of co-regulated expression of their exons. These methods can provide more accurate gene numbers and allow the detection of mRNA splice variants and identification of the tissue- and disease-specific conditions under which genes are expressed. We apply our technique to chromosome 22q under 69 experimental condition pairs, and to the entire human genome under two experimental conditions. We discuss implications for more comprehensive, consistent and reliable genome annotation, more efficient, full-length complementary DNA cloning strategies and application to complex diseases.

  14. Experimental and computational surface and flow-field results for an all-body hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Lockman, William K.; Lawrence, Scott L.; Cleary, Joseph W.

    1990-01-01

    The objective of the present investigation is to establish a benchmark experimental data base for a generic hypersonic vehicle shape for validation and/or calibration of advanced computational fluid dynamics computer codes. This paper includes results from the comprehensive test program conducted in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel for a generic all-body hypersonic aircraft model. Experimental and computational results on flow visualization, surface pressures, surface convective heat transfer, and pitot-pressure flow-field surveys are presented. Comparisons of the experimental results with computational results from an upwind parabolized Navier-Stokes code developed at Ames demonstrate the capabilities of this code.

  15. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C., E-mail: hansec@uw.edu; Columbia University, New York, New York 10027; Victor, B.

We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection (HIT-SI) experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
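
    Biorthogonal decomposition of a probe-array data set is essentially a singular value decomposition of the space-time data matrix. The sketch below extracts spatial mode structures from two data sets (e.g., experiment and simulation) and compares them with a simple cosine-similarity measure; this correlation measure is an assumption for illustration and is not one of the three metrics defined in the paper.

```python
import numpy as np

def bd_modes(data, n_modes=3):
    """Biorthogonal decomposition of a (n_probes x n_times) data matrix:
    the spatial modes ('topos') are the left singular vectors."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return u[:, :n_modes], s[:n_modes]

def mode_correlation(topos_a, topos_b):
    """Absolute cosine similarity between corresponding spatial modes,
    one simple way to compare experimental and simulated structures."""
    corr = []
    for a, b in zip(topos_a.T, topos_b.T):
        corr.append(abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return corr

# Illustrative surface-probe signals: 192 probes, 1000 time samples.
rng = np.random.default_rng(1)
experiment = rng.normal(size=(192, 1000))
simulation = experiment + 0.3 * rng.normal(size=(192, 1000))
topos_exp, _ = bd_modes(experiment)
topos_sim, _ = bd_modes(simulation)
print(mode_correlation(topos_exp, topos_sim))
```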

  16. The Dimensionality of Inference Making: Are Local and Global Inferences Distinguishable?

    ERIC Educational Resources Information Center

    Muijselaar, Marloes M. L.

    2018-01-01

    We investigated the dimensionality of inference making in samples of 4- to 9-year-olds (Ns = 416-783) to determine if local and global coherence inferences could be distinguished. In addition, we examined the validity of our experimenter-developed inference measure by comparing with three additional measures of listening comprehension. Multitrait,…

  17. The Reading Span Test and Its Predictive Power for Reading Comprehension Ability

    ERIC Educational Resources Information Center

    Friedman, Naomi P.; Miyake, Akira

    2004-01-01

    This study had two major goals: to test the effect of administration method on the criterion validity of a commonly used working memory span test, the reading span task, and to examine the relationship between processing and storage in this task. With respect to the first goal, although experimenter- and participant-administered reading span tasks…

  18. Experimental aerothermodynamic research of hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.

    1987-01-01

The 2-D and 3-D advanced computer codes being developed for use in the design of hypersonic aircraft such as the National Aero-Space Plane require comparison of the computational results with a broad spectrum of experimental data to fully assess the validity of the codes. This is particularly true for complex flow fields with control surfaces present and for flows with separation, such as leeside flow. Therefore, the objective is to provide the hypersonic experimental data base required for validation of advanced computational fluid dynamics (CFD) computer codes and for development of a more thorough understanding of the flow physics necessary for these codes. This is being done by implementing a comprehensive test program for a generic all-body hypersonic aircraft model in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel over a broad range of test conditions to obtain pertinent surface and flowfield data. Results from the flow visualization portion of the investigation are presented.

  19. Design, development, testing and validation of a Photonics Virtual Laboratory for the study of LEDs

    NASA Astrophysics Data System (ADS)

    Naranjo, Francisco L.; Martínez, Guadalupe; Pérez, Ángel L.; Pardo, Pedro J.

    2014-07-01

This work presents the design, development, testing and validation of a Photonics Virtual Laboratory, with emphasis on the study of LEDs. The study was conducted from a conceptual, experimental and didactic standpoint, using e-learning and m-learning platforms. Specifically, teaching tools have been developed that help ensure that our students achieve meaningful learning. The scientific aspect, such as the study of LEDs, has been brought together with techniques for the generation and transfer of knowledge through the selection, hierarchization and structuring of information using concept maps. For the validation of the didactic materials developed, procedures with various assessment tools were used for the collection and processing of data, applied in the context of an experimental design. Additionally, a statistical analysis was performed to determine the validity of the materials developed. The assessment was designed to validate the contributions of the new materials over the traditional method of teaching, to quantify the learning achieved by students in order to draw conclusions that serve as a reference for their application in the teaching and learning processes, and to comprehensively validate the work carried out.

  20. Experimental and computational flow-field results for an all-body hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.

    1989-01-01

    A comprehensive test program is defined which is being implemented in the NASA/Ames 3.5 foot Hypersonic Wind Tunnel for obtaining data on a generic all-body hypersonic vehicle for computational fluid dynamics (CFD) code validation. Computational methods (approximate inviscid methods and an upwind parabolized Navier-Stokes code) currently being applied to the all-body model are outlined. Experimental and computational results on surface pressure distributions and Pitot-pressure surveys for the basic sharp-nose model (without control surfaces) at a free-stream Mach number of 7 are presented.

  1. miRTarBase update 2018: a resource for experimentally validated microRNA-target interactions.

    PubMed

    Chou, Chih-Hung; Shrestha, Sirjana; Yang, Chi-Dung; Chang, Nai-Wen; Lin, Yu-Ling; Liao, Kuang-Wen; Huang, Wei-Chi; Sun, Ting-Hsuan; Tu, Siang-Jyun; Lee, Wei-Hsiang; Chiew, Men-Yee; Tai, Chun-San; Wei, Ting-Yen; Tsai, Tzi-Ren; Huang, Hsin-Tzu; Wang, Chung-Yu; Wu, Hsin-Yi; Ho, Shu-Yi; Chen, Pin-Rong; Chuang, Cheng-Hsun; Hsieh, Pei-Jung; Wu, Yi-Shin; Chen, Wen-Liang; Li, Meng-Ju; Wu, Yu-Chun; Huang, Xin-Yi; Ng, Fung Ling; Buddhakosai, Waradee; Huang, Pei-Chun; Lan, Kuan-Chun; Huang, Chia-Yen; Weng, Shun-Long; Cheng, Yeong-Nan; Liang, Chao; Hsu, Wen-Lian; Huang, Hsien-Da

    2018-01-04

MicroRNAs (miRNAs) are small non-coding RNAs of ∼22 nucleotides that are involved in negative regulation of mRNA at the post-transcriptional level. Previously, we developed miRTarBase, which provides information about experimentally validated miRNA-target interactions (MTIs). Here, we describe an updated database containing 422 517 curated MTIs from 4076 miRNAs and 23 054 target genes collected from over 8500 articles. The number of MTIs supported by strong evidence has increased ∼1.4-fold since the last update in 2016. In this updated version, target sites validated by reporter assay that are available in the literature can be downloaded. The target site sequences can be used to extract new features for analysis via a machine learning approach, which can help to evaluate the performance of miRNA-target prediction tools. Furthermore, different browsing options make it easier for users to locate specific MTIs. With these improvements, miRTarBase serves as one of the most comprehensively annotated databases of experimentally validated miRNA-target interactions in the field of miRNA-related research. miRTarBase is available at http://miRTarBase.mbc.nctu.edu.tw/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Experimental Validation: Subscale Aircraft Ground Facilities and Integrated Test Capability

    NASA Technical Reports Server (NTRS)

    Bailey, Roger M.; Hostetler, Robert W., Jr.; Barnes, Kevin N.; Belcastro, Celeste M.; Belcastro, Christine M.

    2005-01-01

    Experimental testing is an important aspect of validating complex integrated safety critical aircraft technologies. The Airborne Subscale Transport Aircraft Research (AirSTAR) Testbed is being developed at NASA Langley to validate technologies under conditions that cannot be flight validated with full-scale vehicles. The AirSTAR capability comprises a series of flying sub-scale models, associated ground-support equipment, and a base research station at NASA Langley. The subscale model capability utilizes a generic 5.5% scaled transport class vehicle known as the Generic Transport Model (GTM). The AirSTAR Ground Facilities encompass the hardware and software infrastructure necessary to provide comprehensive support services for the GTM testbed. The ground facilities support remote piloting of the GTM aircraft, and include all subsystems required for data/video telemetry, experimental flight control algorithm implementation and evaluation, GTM simulation, data recording/archiving, and audio communications. The ground facilities include a self-contained, motorized vehicle serving as a mobile research command/operations center, capable of deployment to remote sites when conducting GTM flight experiments. The ground facilities also include a laboratory based at NASA LaRC providing near identical capabilities as the mobile command/operations center, as well as the capability to receive data/video/audio from, and send data/audio to the mobile command/operations center during GTM flight experiments.

  3. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. The state-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these shortcomings. In the following sections, the employed methodology, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  4. Measurement of fatigue: Comparison of the reliability and validity of single-item and short measures to a comprehensive measure.

    PubMed

    Kim, Hee-Ju; Abraham, Ivo

    2017-01-01

    Evidence is needed on the clinicometric properties of single-item or short measures as alternatives to comprehensive measures. We examined whether two single-item fatigue measures (i.e., Likert scale, numeric rating scale) or a short fatigue measure were comparable to a comprehensive measure in reliability (i.e., internal consistency and test-retest reliability) and validity (i.e., convergent, concurrent, and predictive validity) in Korean young adults. For this quantitative study, we selected the Functional Assessment of Chronic Illness Therapy-Fatigue for the comprehensive measure and the Profile of Mood States-Brief, Fatigue subscale for the short measure; and constructed two single-item measures. A total of 368 students from four nursing colleges in South Korea participated. We used Cronbach's alpha and item-total correlation for internal consistency reliability and intraclass correlation coefficient for test-retest reliability. We assessed Pearson's correlation with a comprehensive measure for convergent validity, with perceived stress level and sleep quality for concurrent validity and the receiver operating characteristic curve for predictive validity. The short measure was comparable to the comprehensive measure in internal consistency reliability (Cronbach's alpha=0.81 vs. 0.88); test-retest reliability (intraclass correlation coefficient=0.66 vs. 0.61); convergent validity (r with comprehensive measure=0.79); concurrent validity (r with perceived stress=0.55, r with sleep quality=0.39) and predictive validity (area under curve=0.88). Single-item measures were not comparable to the comprehensive measure. A short fatigue measure exhibited similar levels of reliability and validity to the comprehensive measure in Korean young adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
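
    Most of the statistics reported in this record are standard; as a brief sketch, two of them, Cronbach's alpha for internal consistency and Pearson's r for convergent validity, might be computed as follows. The item scores and totals are invented placeholders, not study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def pearson_r(x, y):
    """Pearson correlation, e.g. short measure vs. comprehensive measure."""
    return np.corrcoef(x, y)[0, 1]

# Illustrative data: 6 respondents, 4 fatigue items scored 0-4.
scores = np.array([[1, 2, 1, 2],
                   [3, 3, 4, 3],
                   [0, 1, 0, 1],
                   [2, 2, 3, 2],
                   [4, 4, 3, 4],
                   [1, 1, 2, 1]])
print(cronbach_alpha(scores))
print(pearson_r(scores.sum(axis=1), [6, 13, 2, 9, 15, 5]))
```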

  5. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
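
    The identification procedure itself cannot be reconstructed from the abstract. As a loosely related sketch only: fitting a single-channel ARX-type model by least squares from a short input/output record and reading off its frequency response might look like the following. The model orders, signal names, and data are assumptions for illustration, not the authors' multi-channel method.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[k] = -a1*y[k-1] - ... + b1*u[k-1] + ... (ARX sketch)."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        row = [-y[k - i] for i in range(1, na + 1)]
        row += [u[k - i] for i in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

def frequency_response(a, b, omega, dt=0.01):
    """Frequency response of the identified discrete-time model at omega (rad/s)."""
    z = np.exp(1j * omega * dt)
    num = sum(bi * z ** -(i + 1) for i, bi in enumerate(b))
    den = 1 + sum(ai * z ** -(i + 1) for i, ai in enumerate(a))
    return num / den

# Illustrative record: operator output as a lagged, noisy response to input u.
rng = np.random.default_rng(2)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6 * y[k - 1] + 0.4 * u[k - 1] + 0.05 * rng.normal()
a, b = fit_arx(u, y)
print(abs(frequency_response(a, b, np.array([1.0, 5.0, 10.0]))))
```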

  6. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.

  7. LEWICE 2.2 Capabilities and Thermal Validation

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2002-01-01

Computational models of bleed air anti-icing and electrothermal de-icing have been added to the LEWICE 2.0 software by integrating the capabilities of two previous programs, ANTICE and LEWICE/Thermal. This combined model has been released as LEWICE version 2.2. Several advancements have also been added to the previous capabilities of each module. This report presents the capabilities of the software package and provides results for both bleed air and electrothermal cases. A comprehensive validation effort has also been performed to compare the predictions to an existing electrothermal database. A quantitative comparison shows that for de-icing cases the average difference is 9.4 °F (26%), compared to 3 °F for the experimental data, while for evaporative cases the average difference is 2 °F (32%), compared to an experimental error of 4 °F.

  8. VIRmiRNA: a comprehensive resource for experimentally validated viral miRNAs and their targets.

    PubMed

    Qureshi, Abid; Thakur, Nishant; Monga, Isha; Thakur, Anamika; Kumar, Manoj

    2014-01-01

Viral microRNAs (miRNAs) regulate gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host-virus interactions and the pathogenesis of viral diseases. Lately, miRNAs have also shown potential as important targets for the development of novel antiviral therapeutics. Although several miRNA and miRNA-target repositories are available for human and other organisms in the literature, a dedicated resource on viral miRNAs and their targets has been lacking. Therefore, we have developed a comprehensive viral miRNA resource harboring information on 9133 entries in three subdatabases. This includes 1308 experimentally validated miRNA sequences with their isomiRs, encoded by 44 viruses, in the viral miRNA subdatabase 'virmirna', and 7283 of their target genes in 'virmirtar'. Additionally, there is information on 542 antiviral miRNAs encoded by the host against 24 viruses in the antiviral miRNA subdatabase 'avirmir'. The web interface was developed using the Linux-Apache-MySQL-PHP (LAMP) software bundle. User-friendly browse, search, advanced search and useful analysis tools are also provided on the web interface. VIRmiRNA is the first specialized resource of experimentally proven virus-encoded miRNAs and their associated targets. This database would enhance the understanding of viral/host gene regulation and may also prove beneficial in the development of antiviral therapeutics. Database URL: http://crdd.osdd.net/servers/virmirna. © The Author(s) 2014. Published by Oxford University Press.

  9. Numerical Simulation on Hydrodynamics and Combustion in a Circulating Fluidized Bed under O2/CO2 and Air Atmospheres

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Zhao, C. S.; Duan, L. B.; Qu, C. R.; Lu, J. Y.; Chen, X. P.

Oxy-fuel circulating fluidized bed (CFB) combustion technology is in the early stage of development for carbon capture and storage (CCS). Numerical simulation is helpful for better understanding the combustion process and will be significant for CFB scale-up. In this paper, a computational fluid dynamics (CFD) model was employed to simulate the hydrodynamics of gas-solid flow in a CFB riser based on the Eulerian-Granular multiphase model. The cold model predicted the main features of the complex gas-solid flow, including the cluster formation of the solid phase along the walls and the flow structure of up-flow in the core and downward flow in the annular region. Furthermore, coal devolatilization, char combustion and heat transfer were considered by coupling semi-empirical sub-models with the CFD model to establish a comprehensive model. The gas compositions and temperature profiles were predicted, and the predicted outlet gas fractions were validated against experimental data for air combustion. The experimentally validated model was then applied to predict the concentration and temperature distributions under O2/CO2 combustion. The model is useful for the further development of a comprehensive model including more sub-models, such as pollutant emissions, and for better understanding the combustion process in the furnace.

  10. Quantitative micro-CT based coronary artery profiling using interactive local thresholding and cylindrical coordinates.

    PubMed

    Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan

    2015-01-01

Micro-CT is an established imaging technique for high-resolution non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, there is still a lack of micro-CT image metrics suitable for comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS). A novel approach to micro-CT image processing for profiling of coronary ATS is described, providing comprehensive visualization and quantification of contrast-agent-free, high-resolution 3D reconstructions of full-length artery walls. Accelerated coronary ATS was induced by a high-fat, cholesterol-enriched diet in swine, and the left coronary artery (LCA) was harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system was defined on the image space after curved multiplanar reformation of the coronary vessel for comprehensive visualization of the main vessel features, such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms was implemented and the quantitative results validated by histology. The potential of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, was validated by histology. Morphometric indexes from histological data corresponding to several micro-CT slices were derived (double-observer evaluation at different coronary ATS stages) and highly significant correlations (R2 > 0.90) were found. Semi-automatic morphometry was validated by double-observer manual morphometry of micro-CT slices, and highly significant correlations were found (R2 > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast-agent-free, full-length coronary wall profiling, able to assist atherosclerotic vessel morphometry in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.
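
    The cylindrical coordinate system defined on the reformatted vessel is a central step of the pipeline. The sketch below converts voxel positions to (radius, angle, axial position) about a straight local centerline segment; the real pipeline works on a curved, reformatted centerline, and all names and data here are illustrative assumptions.

```python
import numpy as np

def to_cylindrical(points, origin, axis):
    """Map 3D voxel coordinates to (r, theta, z) about a local centerline
    segment defined by an origin point and a unit axis direction."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    rel = np.asarray(points, dtype=float) - origin
    z = rel @ axis                          # axial position along the vessel
    radial = rel - np.outer(z, axis)        # component perpendicular to the axis
    r = np.linalg.norm(radial, axis=1)      # distance from the centerline
    # Build an in-plane reference direction to measure the angle against.
    ref = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(ref) < 1e-8:
        ref = np.cross(axis, [0.0, 1.0, 0.0])
    ref = ref / np.linalg.norm(ref)
    theta = np.arctan2(radial @ np.cross(axis, ref), radial @ ref)
    return r, theta, z

pts = np.array([[1.0, 0.0, 5.0], [0.0, 2.0, 7.0]])
print(to_cylindrical(pts, origin=[0, 0, 0], axis=[0, 0, 1]))
```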

  11. Bioinformatics approach for choosing the correct reference genes when studying gene expression in human keratinocytes.

    PubMed

    Beer, Lucian; Mlitz, Veronika; Gschwandtner, Maria; Berger, Tanja; Narzt, Marie-Sophie; Gruber, Florian; Brunner, Patrick M; Tschachler, Erwin; Mildner, Michael

    2015-10-01

Quantitative reverse transcription polymerase chain reaction (qRT-PCR) has become a mainstay in many areas of skin research. To enable quantitative analysis, it is necessary to analyse the expression of reference genes (RGs) for normalization of target gene expression. The selection of reliable RGs therefore has an important impact on the experimental outcome. In this study, we aimed to identify and validate the best-suited RGs for qRT-PCR in human primary keratinocytes (KCs) over a broad range of experimental conditions using the novel bioinformatics tool 'RefGenes', which is based on a manually curated database of published microarray data. Expression of 6 RGs identified by the RefGenes software and 12 commonly used RGs was validated by qRT-PCR. We assessed whether these 18 markers fulfilled the requirements for a valid RG by the comprehensive ranking of four bioinformatics tools and the coefficient of variation (CV). In an overall ranking, we found GUSB to be the most stably expressed RG, whereas the expression values of the commonly used RGs GAPDH and B2M were significantly affected by varying experimental conditions. Our results identify RefGenes as a powerful tool for the identification of valid RGs and suggest GUSB as the most reliable RG for KCs. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
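
    One of the stability criteria used is the coefficient of variation of a candidate reference gene across experimental conditions. A minimal sketch of ranking candidate RGs by the CV of their Cq values is shown below; the gene names and Cq values are invented placeholders, and tools such as RefGenes apply considerably more elaborate ranking schemes.

```python
import numpy as np

def rank_by_cv(cq_values):
    """Rank candidate reference genes by the coefficient of variation of
    their Cq values across conditions (lower CV = more stable expression)."""
    cv = {gene: np.std(vals, ddof=1) / np.mean(vals)
          for gene, vals in cq_values.items()}
    return sorted(cv.items(), key=lambda item: item[1])

# Illustrative Cq values for three candidate RGs across five conditions.
cq = {
    "GUSB":  [24.1, 24.3, 24.0, 24.2, 24.1],
    "GAPDH": [18.0, 19.5, 17.2, 20.1, 18.8],
    "B2M":   [20.3, 21.8, 19.9, 22.4, 20.7],
}
for gene, cv in rank_by_cv(cq):
    print(f"{gene}: CV = {cv:.3f}")
```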

  12. Validation and Comprehension of Text Information: Two Sides of the Same Coin

    ERIC Educational Resources Information Center

    Richter, Tobias

    2015-01-01

    In psychological research, the comprehension of linguistic information and the knowledge-based assessment of its validity are often regarded as two separate stages of information processing. Recent findings in psycholinguistics and text comprehension research call this two-stage model into question. In particular, validation can affect…

  13. Comprehensive Numerical Simulation of Filling and Solidification of Steel Ingots

    PubMed Central

    Pola, Annalisa; Gelfi, Marcello; La Vecchia, Giovina Marina

    2016-01-01

    In this paper, a complete three-dimensional numerical model of mold filling and solidification of steel ingots is presented. The risk of powder entrapment and defects formation during filling is analyzed in detail, demonstrating the importance of using a comprehensive geometry, with trumpet and runner, compared to conventional simplified models. By using a case study, it was shown that the simplified model significantly underestimates the defects sources, reducing the utility of simulations in supporting mold and process design. An experimental test was also performed on an instrumented mold and the measurements were compared to the calculation results. The good agreement between calculation and trial allowed validating the simulation. PMID:28773890

  14. Coupled CFD/CSD Analysis of an Active-Twist Rotor in a Wind Tunnel with Experimental Validation

    NASA Technical Reports Server (NTRS)

    Massey, Steven J.; Kreshock, Andrew R.; Sekula, Martin K.

    2015-01-01

    An unsteady Reynolds averaged Navier-Stokes analysis loosely coupled with a comprehensive rotorcraft code is presented for a second-generation active-twist rotor. High fidelity Navier-Stokes results for three configurations: an isolated rotor, a rotor with fuselage, and a rotor with fuselage mounted in a wind tunnel, are compared to lifting-line theory based comprehensive rotorcraft code calculations and wind tunnel data. Results indicate that CFD/CSD predictions of flapwise bending moments are in good agreement with wind tunnel measurements for configurations with a fuselage, and that modeling the wind tunnel environment does not significantly enhance computed results. Actuated rotor results for the rotor with fuselage configuration are also validated for predictions of vibratory blade loads and fixed-system vibratory loads. Varying levels of agreement with wind tunnel measurements are observed for blade vibratory loads, depending on the load component (flap, lag, or torsion) and the harmonic being examined. Predicted trends in fixed-system vibratory loads are in good agreement with wind tunnel measurements.

  15. Soft Clustering Criterion Functions for Partitional Document Clustering

    DTIC Science & Technology

    2004-05-26

in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between...it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets

  16. Experimentally valid predictions of muscle force and EMG in models of motor-unit function are most sensitive to neural properties.

    PubMed

    Keenan, Kevin G; Valero-Cuevas, Francisco J

    2007-09-01

    Computational models of motor-unit populations are the objective implementations of the hypothesized mechanisms by which neural and muscle properties give rise to electromyograms (EMGs) and force. However, the variability/uncertainty of the parameters used in these models--and how they affect predictions--confounds assessing these hypothesized mechanisms. We perform a large-scale computational sensitivity analysis on the state-of-the-art computational model of surface EMG, force, and force variability by combining a comprehensive review of published experimental data with Monte Carlo simulations. To exhaustively explore model performance and robustness, we ran numerous iterative simulations each using a random set of values for nine commonly measured motor neuron and muscle parameters. Parameter values were sampled across their reported experimental ranges. Convergence after 439 simulations found that only 3 simulations met our two fitness criteria: approximating the well-established experimental relations for the scaling of EMG amplitude and force variability with mean force. An additional 424 simulations preferentially sampling the neighborhood of those 3 valid simulations converged to reveal 65 additional sets of parameter values for which the model predictions approximate the experimentally known relations. We find the model is not sensitive to muscle properties but very sensitive to several motor neuron properties--especially peak discharge rates and recruitment ranges. Therefore to advance our understanding of EMG and muscle force, it is critical to evaluate the hypothesized neural mechanisms as implemented in today's state-of-the-art models of motor unit function. We discuss experimental and analytical avenues to do so as well as new features that may be added in future implementations of motor-unit models to improve their experimental validity.
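
    The sensitivity analysis described amounts to repeatedly drawing parameter sets from their experimentally reported ranges, running the model, and retaining only the sets whose predictions satisfy the fitness criteria. The sketch below illustrates that sampling-and-filtering loop with a stand-in model; the parameter names, ranges, and fitness test are placeholders, not the published motor-unit model or its values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed parameter ranges (placeholders for the nine motor-unit parameters).
param_ranges = {
    "peak_discharge_rate": (20.0, 50.0),   # Hz
    "recruitment_range":   (10.0, 100.0),  # fold range of recruitment thresholds
    "n_motor_units":       (100, 600),     # not used by the stand-in model below
}

def sample_parameters():
    """Draw one random parameter set uniformly from its reported range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}

def run_model(params):
    """Stand-in for the motor-unit model: returns a mock slope of the
    EMG-amplitude-vs-force relation used by the fitness test."""
    return 0.01 * params["peak_discharge_rate"] + 0.001 * params["recruitment_range"]

def is_valid(emg_force_slope, target=0.3, tol=0.1):
    """Fitness criterion: the prediction must approximate the experimental relation."""
    return abs(emg_force_slope - target) < tol

valid_sets = [p for p in (sample_parameters() for _ in range(500))
              if is_valid(run_model(p))]
print(f"{len(valid_sets)} of 500 sampled parameter sets met the criterion")
```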

  17. Perspectives on the simulation of protein–surface interactions using empirical force field methods

    PubMed Central

    Latour, Robert A.

    2014-01-01

    Protein–surface interactions are of fundamental importance for a broad range of applications in the fields of biomaterials and biotechnology. Present experimental methods are limited in their ability to provide a comprehensive depiction of these interactions at the atomistic level. In contrast, empirical force field based simulation methods inherently provide the ability to predict and visualize protein–surface interactions with full atomistic detail. These methods, however, must be carefully developed, validated, and properly applied before confidence can be placed in results from the simulations. In this perspectives paper, I provide an overview of the critical aspects that I consider being of greatest importance for the development of these methods, with a focus on the research that my combined experimental and molecular simulation groups have conducted over the past decade to address these issues. These critical issues include the tuning of interfacial force field parameters to accurately represent the thermodynamics of interfacial behavior, adequate sampling of these types of complex molecular systems to generate results that can be comparable with experimental data, and the generation of experimental data that can be used for simulation results evaluation and validation. PMID:25028242

  18. A Dynamic Speech Comprehension Test for Assessing Real-World Listening Ability.

    PubMed

    Best, Virginia; Keidser, Gitte; Freeston, Katrina; Buchholz, Jörg M

    2016-07-01

    Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools. The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situations and ultimately provide better predictions of real-world communication abilities and the effect of hearing aids. A test of ongoing speech comprehension introduced previously was extended to include naturalistic conversations between multiple talkers as targets, and a reverberant background environment containing competing conversations. In this article, we describe the development of this test and present a validation study. Thirty listeners with normal hearing participated in this study. Speech comprehension was measured for one-, two-, and three-talker passages at three different signal-to-noise ratios (SNRs), and working memory ability was measured using the reading span test. Analyses were conducted to examine passage equivalence, learning effects, and test-retest reliability, and to characterize the effects of number of talkers and SNR. Although we observed differences in difficulty across passages, it was possible to group the passages into four equivalent sets. Using this grouping, we achieved good test-retest reliability and observed no significant learning effects. Comprehension performance was sensitive to the SNR but did not decrease as the number of talkers increased. Individual performance showed associations with age and reading span score. This new dynamic speech comprehension test appears to be valid and suitable for experimental purposes. Further work will explore its utility as a tool for predicting real-world communication ability and hearing aid benefit. American Academy of Audiology.

  19. NASA National Combustion Code Simulations

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony; Davoudzadeh, Farhad

    2001-01-01

A systematic effort is in progress to further validate the National Combustion Code (NCC) that has been developed at NASA Glenn Research Center (GRC) for comprehensive modeling and simulation of aerospace combustion systems. The validation efforts include numerical simulation of the gas-phase combustor experiments conducted at the Center for Turbulence Research (CTR), Stanford University, followed by comparison and evaluation of the computed results with the experimental data. Presently, at GRC, a numerical model of the experimental gaseous combustor has been built to simulate the experimental configuration. The constructed numerical geometry includes the flow development sections for the air annulus and fuel pipe, the 24-channel air and fuel swirlers, the hub, the combustor, and the tail pipe. Furthermore, a three-dimensional multi-block, multi-grid mesh (1.6 million grid points, 3 levels of multi-grid) has been generated. Computational simulation of the gaseous combustor flow field operating on methane fuel has started. The computational domain includes the whole flow regime starting from the fuel pipe and the air annulus, through the 12 air and 12 fuel channels, into the combustion region and through the tail pipe.

  20. Validation and Comprehension: An Integrated Overview

    ERIC Educational Resources Information Center

    Kendeou, Panayiota

    2014-01-01

    In this article, I review and discuss the work presented in this special issue while focusing on a number of issues that warrant further investigation in validation research. These issues pertain to the nature of the validation processes, the processes and mechanisms that support validation during comprehension, the factors that influence…

  1. A comprehensive computational model of sound transmission through the porcine lung

    PubMed Central

    Dai, Zoujun; Peng, Ying; Henry, Brian M.; Mansy, Hansen A.; Sandler, Richard H.; Royston, Thomas J.

    2014-01-01

    A comprehensive computational simulation model of sound transmission through the porcine lung is introduced and experimentally evaluated. This “subject-specific” model utilizes parenchymal and major airway geometry derived from x-ray CT images. The lung parenchyma is modeled as a poroviscoelastic material using Biot theory. A finite element (FE) mesh of the lung that includes airway detail is created and used in comsol FE software to simulate the vibroacoustic response of the lung to sound input at the trachea. The FE simulation model is validated by comparing simulation results to experimental measurements using scanning laser Doppler vibrometry on the surface of an excised, preserved lung. The FE model can also be used to calculate and visualize vibroacoustic pressure and motion inside the lung and its airways caused by the acoustic input. The effect of diffuse lung fibrosis and of a local tumor on the lung acoustic response is simulated and visualized using the FE model. In the future, this type of visualization can be compared and matched with experimentally obtained elastographic images to better quantify regional lung material properties to noninvasively diagnose and stage disease and response to treatment. PMID:25190415

  2. A comprehensive computational model of sound transmission through the porcine lung.

    PubMed

    Dai, Zoujun; Peng, Ying; Henry, Brian M; Mansy, Hansen A; Sandler, Richard H; Royston, Thomas J

    2014-09-01

    A comprehensive computational simulation model of sound transmission through the porcine lung is introduced and experimentally evaluated. This "subject-specific" model utilizes parenchymal and major airway geometry derived from x-ray CT images. The lung parenchyma is modeled as a poroviscoelastic material using Biot theory. A finite element (FE) mesh of the lung that includes airway detail is created and used in comsol FE software to simulate the vibroacoustic response of the lung to sound input at the trachea. The FE simulation model is validated by comparing simulation results to experimental measurements using scanning laser Doppler vibrometry on the surface of an excised, preserved lung. The FE model can also be used to calculate and visualize vibroacoustic pressure and motion inside the lung and its airways caused by the acoustic input. The effect of diffuse lung fibrosis and of a local tumor on the lung acoustic response is simulated and visualized using the FE model. In the future, this type of visualization can be compared and matched with experimentally obtained elastographic images to better quantify regional lung material properties to noninvasively diagnose and stage disease and response to treatment.

  3. Pyrolysis Model Development for a Multilayer Floor Covering

    PubMed Central

    McKinnon, Mark B.; Stoliarov, Stanislav I.

    2015-01-01

    Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite were validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was calculated as approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m−2, which is within the mean experimental uncertainty. PMID:28793556

  4. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    PubMed

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate a method of "multi-activity-index evaluation and multi-component combination optimization" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the liver-to-body ratio. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) gave a more reasonable and objective multi-activity-index evaluation, as it reflected both the rank order of the activity indexes and the objective sample data. LASSO modeling accurately captured the relationship between different combinations of Jiangzhi granules and the comprehensive activity index. In the validation experiment, the optimized combination of Jiangzhi granules showed better comprehensive activity index values than the original formula. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.
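
    As an illustration of the LASSO step described above (a minimal sketch, not code from the study), the snippet below fits an L1-penalized regression linking a uniform-design matrix of component doses to a composite activity index; all data, dimensions and variable names are placeholder assumptions.

        # Placeholder sketch: LASSO linking herb-component dose combinations from a
        # uniform design to a weighted composite activity index (all values invented).
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(30, 6))                  # 30 design runs, 6 components
        y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=30)   # stand-in composite index

        model = LassoCV(cv=5).fit(X, y)              # L1 penalty selected by cross-validation
        print(model.coef_)                           # sparse coefficients flag influential components
        best_run = X[np.argmax(model.predict(X))]    # candidate combination for a validation experiment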

  5. Comprehensive evaluation of direct injection mass spectrometry for the quantitative profiling of volatiles in food samples

    PubMed Central

    2016-01-01

    Although qualitative strategies based on direct injection mass spectrometry (DIMS) have recently emerged as an alternative for the rapid classification of food samples, the potential of these approaches in quantitative tasks has scarcely been addressed to date. In this paper, the applicability of different multivariate regression procedures to data collected by DIMS from simulated mixtures has been evaluated. The most relevant factors affecting quantitation, such as random noise, the number of calibration samples, type of validation, mixture complexity and similarity of mass spectra, were also considered and comprehensively discussed. Based on the conclusions drawn from simulated data, and as an example of application, experimental mass spectral fingerprints collected by direct thermal desorption coupled to mass spectrometry were used for the quantitation of major volatiles in Thymus zygis subsp. zygis chemotypes. The results obtained, validated with the direct thermal desorption coupled to gas chromatography–mass spectrometry method here used as a reference, show the potential of DIMS approaches for the fast and precise quantitative profiling of volatiles in foods. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644978
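
    For orientation only: partial least squares (PLS) regression is one common multivariate calibration procedure for such mass-spectral fingerprints. The sketch below, built on invented data shapes, shows how a DIMS calibration and its cross-validation error might be set up; it is not the specific procedure evaluated in the paper.

        # Illustrative PLS calibration of DIMS fingerprints (synthetic placeholder data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        X = rng.random((40, 200))        # 40 calibration mixtures x 200 m/z channels
        Y = rng.random((40, 3))          # known concentrations of 3 target volatiles

        pls = PLSRegression(n_components=5)
        Y_cv = cross_val_predict(pls, X, Y, cv=10)            # internal (cross-) validation
        rmsecv = np.sqrt(((Y - Y_cv) ** 2).mean(axis=0))      # per-analyte error
        print(rmsecv)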

  6. Standardized classroom management program: Social validation and replication studies in Utah and Oregon

    PubMed Central

    Greenwood, Charles R.; Hops, Hyman; Walker, Hill M.; Guild, Jacqueline J.; Stokes, Judith; Young, K. Richard; Keleman, Kenneth S.; Willardson, Marlyn

    1979-01-01

    A comprehensive validation study was conducted of the Program for Academic Survival Skills (PASS), a consultant-based, teacher-mediated program for student classroom behavior. The study addressed questions related to: (a) brief consultant training, (b) subsequent teacher training by consultants using PASS manuals, (c) contrasts between PASS experimental teachers and students and equivalent controls on measures of teacher management skills, student classroom behavior, teacher ratings of student problem behaviors, and academic achievement, (d) reported satisfaction of participants, and (e) replication of effects across two separate school sites. Results indicated that in both sites significant effects were noted in favor of the PASS experimental group for (a) teacher approval, (b) student appropriate classroom behavior, and (c) four categories of student inappropriate behavior. Program satisfaction ratings of students, teachers, and consultants were uniformly positive, and continued use of the program was reported a year later. Discussion focused upon issues of cost-effectiveness, differential site effects, and the relationship between appropriate classroom behavior and academic achievement. PMID:16795604

  7. Standardized classroom management program: Social validation and replication studies in Utah and Oregon.

    PubMed

    Greenwood, C R; Hops, H; Walker, H M; Guild, J J; Stokes, J; Young, K R; Keleman, K S; Willardson, M

    1979-01-01

    A comprehensive validation study was conducted of the Program for Academic Survival Skills (PASS), a consultant-based, teacher-mediated program for student classroom behavior. The study addressed questions related to: (a) brief consultant training, (b) subsequent teacher training by consultants using PASS manuals, (c) contrasts between PASS experimental teachers and students and equivalent controls on measures of teacher management skills, student classroom behavior, teacher ratings of student problem behaviors, and academic achievement, (d) reported satisfaction of participants, and (e) replication of effects across two separate school sites. Results indicated that in both sites significant effects were noted in favor of the PASS experimental group for (a) teacher approval, (b) student appropriate classroom behavior, and (c) four categories of student inappropriate behavior. Program satisfaction ratings of students, teachers, and consultants were uniformly positive, and continued use of the program was reported a year later. Discussion focused upon issues of cost-effectiveness, differential site effects, and the relationship between appropriate classroom behavior and academic achievement.

  8. Modeling and analysis of a resonant nanosystem

    NASA Astrophysics Data System (ADS)

    Calvert, Scott L.

    The majority of investigations into nanoelectromechanical resonators focus on a single area of the resonator's function. This focus varies from the development of a model for a beam's vibration, to the modeling of electrostatic forces, to a qualitative explanation of experimentally-obtained currents. Despite these efforts, there remains a gap between these works, and the level of sophistication needed to truly design nanoresonant systems for efficient commercial use. Towards this end, a comprehensive system model for both a nanobeam resonator and its related experimental setup is proposed. Furthermore, a simulation arrangement is suggested as a method for facilitating the study of the system-level behavior of these devices in a variety of cases that could not be easily obtained experimentally or analytically. The dynamics driving the nanoresonator's motion, as well as the electrical interactions influencing the forcing and output of the system, are modeled, experimentally validated, and studied. The model seeks to develop both a simple circuit representation of the nanoresonator, and to create a mathematical system that can be used to predict and interpret the observed behavior. Due to the assumptions used to simplify the model to a point of reasonable comprehension, the model is most accurate for small beam deflections near the first eigenmode of the beam. The process and results of an experimental investigation are documented, and compared with a circuit simulation modeling the full test system. The comparison qualitatively proves the functionality of the model, while a numerical analysis serves to validate the functionality and setup of the circuit simulation. The use of the simulation enables a much broader investigation of both the electrical behavior and the physical device's dynamics. It is used to complement an assessment of the tuning behavior of the system's linear natural frequency by demonstrating the tuning behavior of the full nonlinear response. The simulation is used to demonstrate the difficulties with the contemporary mixing approach to experimental data collection and to complete a variety of case studies investigating the use of the nanoresonator systems in practical applications, such as signal filtering. Many of these case studies would be difficult to complete analytically, but results are quickly achieved through the use of the simulation.

  9. When all children comprehend: increasing the external validity of narrative comprehension development research

    PubMed Central

    Burris, Silas E.; Brown, Danielle D.

    2014-01-01

    Narratives, also called stories, can be found in conversations, children's play interactions, reading material, and television programs. From infancy to adulthood, narrative comprehension processes interpret events and inform our understanding of physical and social environments. These processes have been extensively studied to ascertain the multifaceted nature of narrative comprehension. From this research we know that three overlapping processes (i.e., knowledge integration, goal structure understanding, and causal inference generation) proposed by the constructionist paradigm are necessary for narrative comprehension, narrative comprehension has a predictive relationship with children's later reading performance, and comprehension processes are generalizable to other contexts. Much of the previous research has emphasized internal and predictive validity, thus limiting the generalizability of previous findings. We are concerned these limitations may be excluding underrepresented populations from benefits and implications identified by early comprehension processes research. This review identifies gaps in extant literature regarding external validity and argues for increased emphasis on externally valid research. We highlight limited research on narrative comprehension processes in children from low-income and minority populations, and argue for changes in comprehension assessments. Specifically, we argue both on- and off-line assessments should be used across various narrative types (e.g., picture books, televised narratives) with traditionally underserved and underrepresented populations. We propose that increasing the generalizability of narrative comprehension processes research can inform persistent reading achievement gaps, and have practical implications for how children learn from narratives. PMID:24659973

  10. Validation of a Plasma-Based Comprehensive Cancer Genotyping Assay Utilizing Orthogonal Tissue- and Plasma-Based Methodologies.

    PubMed

    Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli

    2018-04-24

    Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPAs) and negative percent agreement (NPAs) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples comprised a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. ©2018 AACR. ©2018 American Association for Cancer Research.

  11. A Low Vision Reading Comprehension Test.

    ERIC Educational Resources Information Center

    Watson, G. R.; And Others

    1996-01-01

    Fifty adults (ages 28-86) with macular degeneration were given the Low Vision Reading Comprehension Assessment (LVRCA) to test its reliability and validity in evaluating the reading comprehension of those with vision impairments. The LVRCA was found to take only nine minutes to administer and was a valid and reliable tool. (CR)

  12. Multiplexed protein measurement: technologies and applications of protein and antibody arrays

    PubMed Central

    Kingsmore, Stephen F.

    2006-01-01

    The ability to measure the abundance of many proteins precisely and simultaneously in experimental samples is an important, recent advance for static and dynamic, as well as descriptive and predictive, biological research. The value of multiplexed protein measurement is being established in applications such as comprehensive proteomic surveys, studies of protein networks and pathways, validation of genomic discoveries and clinical biomarker development. As standards do not yet exist that bridge all of these applications, the current recommended best practice for validation of results is to approach study design in an iterative process and to integrate data from several measurement technologies. This review describes current and emerging multiplexed protein measurement technologies and their applications, and discusses the remaining challenges in this field. PMID:16582876

  13. A logic-based method to build signaling networks and propose experimental plans.

    PubMed

    Rougny, Adrien; Gloaguen, Pauline; Langonné, Nathalie; Reiter, Eric; Crépieux, Pascale; Poupon, Anne; Froidevaux, Christine

    2018-05-18

    With the dramatic increase of the diversity and the sheer quantity of biological data generated, the construction of comprehensive signaling networks that include precise mechanisms cannot be carried out manually anymore. In this context, we propose a logic-based method that allows building large signaling networks automatically. Our method is based on a set of expert rules that make explicit the reasoning made by biologists when interpreting experimental results coming from a wide variety of experiment types. These rules allow formulating all the conclusions that can be inferred from a set of experimental results, and thus building all the possible networks that explain these results. Moreover, given an hypothesis, our system proposes experimental plans to carry out in order to validate or invalidate it. To evaluate the performance of our method, we applied our framework to the reconstruction of the FSHR-induced and the EGFR-induced signaling networks. The FSHR is known to induce the transactivation of the EGFR, but very little is known on the resulting FSH- and EGF-dependent network. We built a single network using data underlying both networks. This leads to a new hypothesis on the activation of MEK by p38MAPK, which we validate experimentally. These preliminary results represent a first step in the demonstration of a cross-talk between these two major MAP kinases pathways.
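
    Purely to illustrate the flavour of rule-based inference over experimental facts: the toy forward-chaining step below applies one invented expert rule to invented facts. The actual method is a full logic-based framework; the rule, facts and names here are not the authors' rule set or the FSHR/EGFR data.

        # Toy forward-chaining step over experiment-derived facts (all content invented).
        facts = {("phosphorylated_after_stimulation", "MEK", "FSH"),
                 ("kinase", "p38MAPK"),
                 ("inhibitor_blocks_phosphorylation", "p38MAPK", "MEK", "FSH")}

        def infer(facts):
            """One expert rule: if inhibiting kinase K blocks phosphorylation of target T
            under stimulus S, conclude that K activates T in the S-induced network."""
            new = set()
            for f in facts:
                if f[0] == "inhibitor_blocks_phosphorylation":
                    _, k, t, s = f
                    if ("kinase", k) in facts and ("phosphorylated_after_stimulation", t, s) in facts:
                        new.add(("activates", k, t))
            return facts | new

        print(infer(facts))  # adds ("activates", "p38MAPK", "MEK")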

  14. miRwayDB: a database for experimentally validated microRNA-pathway associations in pathophysiological conditions

    PubMed Central

    Das, Sankha Subhra; Saha, Pritam

    2018-01-01

    MicroRNAs (miRNAs) are well-known as key regulators of diverse biological pathways. A growing body of experimental evidence has shown that abnormal miRNA expression profiles are responsible for various pathophysiological conditions by modulating genes in disease-associated pathways. In spite of the rapid increase in research data confirming such associations, scientists still do not have access to a consolidated database offering these miRNA-pathway association details for critical diseases. We have developed miRwayDB, a database providing comprehensive information on experimentally validated miRNA-pathway associations in various pathophysiological conditions, utilizing data collected from the published literature. To the best of our knowledge, it is the first database that provides information about experimentally validated miRNA-mediated pathway dysregulation as seen specifically in critical human diseases and hence indicative of a cause-and-effect relationship in most cases. The current version of miRwayDB collects an exhaustive list of miRNA-pathway association entries for 76 critical disease conditions by reviewing 663 published articles. Each database entry contains complete information on the name of the pathophysiological condition, associated miRNA(s), experimental sample type(s), regulation pattern (up/down) of the miRNA, pathway association(s), targeted member(s) of the dysregulated pathway(s) and a brief description. In addition, miRwayDB provides miRNA, gene and pathway scores to evaluate the role of miRNA-regulated pathways in various pathophysiological conditions. The database can also be used for other biomedical approaches such as validation of computational analyses, integrative analysis and prediction by computational models. It also offers a submission page for novel data from recently published studies. We believe that miRwayDB will be a useful tool for the miRNA research community. Database URL: http://www.mirway.iitkgp.ac.in PMID:29688364

  15. Comprehensive characterization of measurement data gathered by the pressure tube to calandria tube gap probe

    NASA Astrophysics Data System (ADS)

    Shokralla, Shaddy Samir Zaki

    Multi-frequency eddy current measurements are employed in estimating pressure tube (PT) to calandria tube (CT) gap in CANDU fuel channels, a critical inspection activity required to ensure fitness for service of fuel channels. In this thesis, a comprehensive characterization of eddy current gap data is laid out, in order to extract further information on fuel channel condition, and to identify generalized applications for multi-frequency eddy current data. A surface profiling technique, generalizable to multiple probe and conductive material configurations has been developed. This technique has allowed for identification of various pressure tube artefacts, has been independently validated (using ultrasonic measurements), and has been deployed and commissioned at Ontario Power Generation. Dodd and Deeds solutions to the electromagnetic boundary value problem associated with the PT to CT gap probe configuration were experimentally validated for amplitude response to changes in gap. Using the validated Dodd and Deeds solutions, principal components analysis (PCA) has been employed to identify independence and redundancies in multi-frequency eddy current data. This has allowed for an enhanced visualization of factors affecting gap measurement. Results of the PCA of simulation data are consistent with the skin depth equation, and are validated against PCA of physical experiments. Finally, compressed data acquisition has been realized, allowing faster data acquisition for multi-frequency eddy current systems with hardware limitations, and is generalizable to other applications where real time acquisition of large data sets is prohibitive.
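
    A minimal sketch of the PCA step on multi-frequency eddy current data follows; the matrix shape and channel layout are assumptions for illustration, not the thesis data.

        # PCA over multi-frequency eddy current channels to expose redundancy between frequencies.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 8))    # scan positions x (Re, Im) channels at 4 frequencies (assumed)

        pca = PCA().fit(X)
        print(pca.explained_variance_ratio_)   # how many components actually carry gap information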

  16. Development and validation of a reading-related assessment battery in Malay for the purpose of dyslexia assessment.

    PubMed

    Lee, Lay Wah

    2008-06-01

    Malay is an alphabetic language with transparent orthography. A Malay reading-related assessment battery which was conceptualised based on the International Dyslexia Association definition of dyslexia was developed and validated for the purpose of dyslexia assessment. The battery consisted of ten tests: Letter Naming, Word Reading, Non-word Reading, Spelling, Passage Reading, Reading Comprehension, Listening Comprehension, Elision, Rapid Letter Naming and Digit Span. Content validity was established by expert judgment. Concurrent validity was obtained using the schools' language tests as criterion. Evidence of predictive and construct validity was obtained through regression analyses and factor analyses. Phonological awareness was the most significant predictor of word-level literacy skills in Malay, with rapid naming making independent secondary contributions. Decoding and listening comprehension made separate contributions to reading comprehension, with decoding as the more prominent predictor. Factor analysis revealed four factors: phonological decoding, phonological naming, comprehension and verbal short-term memory. In conclusion, despite differences in orthography, there are striking similarities in the theoretical constructs of reading-related tasks in Malay and in English.

  17. Reassessment of Atomic Mobilities in fcc Cu-Ag-Sn System Aiming at Establishment of an Atomic Mobility Database in Sn-Ag-Cu-In-Sb-Bi-Pb Solder Alloys

    NASA Astrophysics Data System (ADS)

    Xu, Huixia; Zhang, Lijun; Cheng, Kaiming; Chen, Weimin; Du, Yong

    2017-04-01

    To establish an accurate atomic mobility database for solder alloys, a reassessment of atomic mobilities in the fcc (face-centered cubic) Cu-Ag-Sn system was performed in the present work. The work entailed initial preparation of three fcc Cu-Sn diffusion couples, which were used to determine the composition-dependent interdiffusivities at 873 K, 923 K, and 973 K, to validate the literature data and provide new experimental data at low temperatures. Then, atomic mobilities in the three boundary binaries, fcc Cu-Sn, fcc Ag-Sn, and fcc Cu-Ag, were updated based on the various experimental diffusivities obtained from the literature and the present work, together with the available thermodynamic database for solder alloys. Finally, based on the large number of interdiffusivities recently measured by the present authors, atomic mobilities in the fcc Cu-Ag-Sn ternary system were carefully evaluated. A comprehensive comparison between various calculated/model-predicted diffusion properties and the experimental data was used to validate the reliability of the obtained atomic mobilities in ternary fcc Cu-Ag-Sn alloys.

  18. Model-based intensification of a fed-batch microbial process for the maximization of polyhydroxybutyrate (PHB) production rate.

    PubMed

    Penloglou, Giannis; Vasileiadou, Athina; Chatzidoukas, Christos; Kiparissides, Costas

    2017-08-01

    An integrated metabolic-polymerization-macroscopic model, describing the microbial production of polyhydroxybutyrate (PHB) in Azohydromonas lata bacteria, was developed and validated using a comprehensive series of experimental measurements. The model accounted for biomass growth, biopolymer accumulation, carbon and nitrogen sources utilization, oxygen mass transfer and uptake rates and average molecular weights of the accumulated PHB, produced under batch and fed-batch cultivation conditions. Model predictions were in excellent agreement with experimental measurements. The validated model was subsequently utilized to calculate optimal operating conditions and feeding policies for maximizing PHB productivity for desired PHB molecular properties. More specifically, two optimal fed-batch strategies were calculated and experimentally tested: (1) a nitrogen-limited fed-batch policy and (2) a nitrogen sufficient one. The calculated optimal operating policies resulted in a maximum PHB content (94% g/g) in the cultivated bacteria and a biopolymer productivity of 4.2 g/(l h), respectively. Moreover, it was demonstrated that different PHB grades with weight average molecular weights of up to 1513 kg/mol could be produced via the optimal selection of bioprocess operating conditions.

  19. Development and Experimental Validation of Large Eddy Simulation Techniques for the Prediction of Combustion-Dynamic Process in Syngas Combustion: Characterization of Autoignition, Flashback, and Flame-Liftoff at Gas-Turbine Relevant Operating Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ihme, Matthias; Driscoll, James

    2015-08-31

    The objective of this closely coordinated experimental and computational research effort is the development of simulation techniques for the prediction of combustion processes, relevant to the oxidation of syngas and high hydrogen content (HHC) fuels at gas-turbine relevant operating conditions. Specifically, the research goals are (i) the characterization of the sensitivity of syngas ignition processes to hydrodynamic processes and perturbations in temperature and mixture composition in rapid compression machines and flow reactors and (ii) to conduct comprehensive experimental investigations in a swirl-stabilized gas turbine (GT) combustor under realistic high-pressure operating conditions in order (iii) to obtain fundamental understanding about mechanisms controlling unstable flame regimes in HHC-combustion.

  20. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-T distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied within a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.
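
    A hedged sketch of the idea (not the authors' exact procedure): perturb the CFD inputs by their tolerances, rerun, and convert the spread of predicted heat transfer coefficients into an interval using the small-sample Student-T distribution. The run values below are placeholders.

        # Student-t uncertainty interval from a handful of perturbed-input CFD runs.
        import numpy as np
        from scipy import stats

        h_runs = np.array([52.1, 50.8, 53.4, 51.6, 52.9])   # h [W/(m^2 K)] from perturbed-input runs
        n = h_runs.size
        mean, s = h_runs.mean(), h_runs.std(ddof=1)
        t95 = stats.t.ppf(0.975, df=n - 1)                  # two-sided 95% coverage factor
        print(f"h = {mean:.1f} +/- {t95 * s / np.sqrt(n):.1f} W/(m^2 K) at 95% confidence")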

  1. The Construct Validity of Teachers' Perceptions of Change in Schools Implementing Comprehensive School Reform Models

    ERIC Educational Resources Information Center

    Nunnery, John A.; Ross, Steven M.; Bol, Linda

    2008-01-01

    This study reports the results of a validation study of the Comprehensive School Restructuring Teacher Questionnaire (CSRTQ) and the School Observation Measure (SOM), which are intended for use in evaluating comprehensive school reform efforts. The CSRTQ, which putatively measures five factors related to school restructuring (internal focus,…

  2. Assessing reading comprehension with narrative and expository texts: Dimensionality and relationship with fluency, vocabulary and memory.

    PubMed

    Santos, Sandra; Cadime, Irene; Viana, Fernanda L; Chaves-Sousa, Séli; Gayo, Elena; Maia, José; Ribeiro, Iolanda

    2017-02-01

    Reading comprehension assessment should rely on valid instruments that enable adequate conclusions to be drawn regarding students' reading comprehension performance. In this article, two studies were conducted to collect validity evidence for the vertically scaled forms of two Tests of Reading Comprehension for Portuguese elementary school students in the second to fourth grades, one with narrative texts (TRC-n) and another with expository ones (TRC-e). Two samples of 950 and 990 students participated in Study 1, the study of the dimensionality of the TRC-n and TRC-e forms, respectively. Confirmatory factor analyses provided evidence of an acceptable fit for the one-factor solution for all test forms. Study 2 included 218 students to collect criterion-related validity evidence. The scores obtained in each of the test forms were significantly correlated with the ones obtained in other reading comprehension measures and with the results obtained in oral reading fluency, vocabulary and working memory tests. Evidence suggests that the test forms are valid measures of reading comprehension. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  3. Methods and Practices of Investigators for Determining Participants’ Decisional Capacity and Comprehension of Protocols

    PubMed Central

    Kon, Alexander A.; Klug, Michael

    2010-01-01

    Ethicists recommend that investigators assess subjects’ comprehension prior to accepting their consent as valid. Because children represent an at-risk population, ensuring adequate comprehension in pediatric research is vital. We surveyed all corresponding authors of research articles published over a six-month period in five leading adult and pediatric journals. Our goal was to assess how often subject’s comprehension or decisional capacity was assessed in the consent process, whether there was any difference between adult and pediatric research projects, and the rate at which investigators use formal or validated tools to assess capacity. Responses from 102 authors were analyzed (response rate 56%). Approximately two-thirds of respondents stated that they assessed comprehension or decisional capacity prior to accepting consent, and we found no difference between adult and pediatric researchers. Nine investigators used a formal questionnaire, and three used a validated tool. These findings suggest that fewer than expected investigators assess comprehension and decisional capacity, and that the use of standardized and validated tools is the exception rather than the rule. PMID:19385838

  4. Knowledge Activation, Integration, and Validation during Narrative Text Comprehension

    ERIC Educational Resources Information Center

    Cook, Anne E.; O'Brien, Edward J.

    2014-01-01

    Previous text comprehension studies using the contradiction paradigm primarily tested assumptions of the activation mechanism involved in reading. However, the nature of the contradiction in such studies relied on validation of information in readers' general world knowledge. We directly tested this validation process by varying the strength of…

  5. The impact of home care nurses' numeracy and graph literacy on comprehension of visual display information: implications for dashboard design.

    PubMed

    Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David

    2018-02-01

    To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
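
    For readers unfamiliar with the analysis, a mixed linear model of the kind described can be set up along the lines below; the column names, toy data and effect sizes are invented, and the study's actual variable coding may differ.

        # Toy mixed linear model: comprehension score vs. numeracy, graph literacy and
        # display format, with a random intercept per nurse (synthetic data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 80
        df = pd.DataFrame({
            "numeracy":       rng.integers(1, 6, n),
            "graph_literacy": rng.integers(1, 6, n),
            "fmt":            rng.choice(["bar", "line", "spider", "table"], n),
            "nurse_id":       rng.integers(1, 21, n),
        })
        df["score"] = (0.5 * df["numeracy"] + 0.8 * df["graph_literacy"]
                       + 1.0 * (df["fmt"] == "bar") + rng.normal(0, 1, n))

        fit = smf.mixedlm("score ~ numeracy + graph_literacy + C(fmt)",
                          data=df, groups=df["nurse_id"]).fit()
        print(fit.summary())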

  6. A novel left heart simulator for the multi-modality characterization of native mitral valve geometry and fluid mechanics.

    PubMed

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P

    2013-02-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 μm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, this work represents the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations.

  7. A Novel Left Heart Simulator for the Multi-modality Characterization of Native Mitral Valve Geometry and Fluid Mechanics

    PubMed Central

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P.

    2012-01-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 µm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry for direct comparison of resultant leaflet kinematics. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole, with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, these data represent the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations. PMID:22965640

  8. MASH Suite Pro: A Comprehensive Software Tool for Top-Down Proteomics*

    PubMed Central

    Cai, Wenxuan; Guner, Huseyin; Gregorich, Zachery R.; Chen, Albert J.; Ayaz-Guner, Serife; Peng, Ying; Valeja, Santosh G.; Liu, Xiaowen; Ge, Ying

    2016-01-01

    Top-down mass spectrometry (MS)-based proteomics is arguably a disruptive technology for the comprehensive analysis of all proteoforms arising from genetic variation, alternative splicing, and posttranslational modifications (PTMs). However, the complexity of top-down high-resolution mass spectra presents a significant challenge for data analysis. In contrast to the well-developed software packages available for data analysis in bottom-up proteomics, the data analysis tools in top-down proteomics remain underdeveloped. Moreover, despite recent efforts to develop algorithms and tools for the deconvolution of top-down high-resolution mass spectra and the identification of proteins from complex mixtures, a multifunctional software platform, which allows for the identification, quantitation, and characterization of proteoforms with visual validation, is still lacking. Herein, we have developed MASH Suite Pro, a comprehensive software tool for top-down proteomics with multifaceted functionality. MASH Suite Pro is capable of processing high-resolution MS and tandem MS (MS/MS) data using two deconvolution algorithms to optimize protein identification results. In addition, MASH Suite Pro allows for the characterization of PTMs and sequence variations, as well as the relative quantitation of multiple proteoforms in different experimental conditions. The program also provides visualization components for validation and correction of the computational outputs. Furthermore, MASH Suite Pro facilitates data reporting and presentation via direct output of the graphics. Thus, MASH Suite Pro significantly simplifies and speeds up the interpretation of high-resolution top-down proteomics data by integrating tools for protein identification, quantitation, characterization, and visual validation into a customizable and user-friendly interface. We envision that MASH Suite Pro will play an integral role in advancing the burgeoning field of top-down proteomics. PMID:26598644

  9. Validation in the clinical process: four settings for objectification of the subjectivity of understanding.

    PubMed

    Beland, H

    1994-12-01

    Clinical material is presented for discussion with the aim of exemplifying the author's conceptions of validation in a number of sessions and in psychoanalytic research and of making them verifiable, susceptible to consensus and/or falsifiable. Since Freud's postscript to the Dora case, the first clinical validation in the history of psychoanalysis, validation has been group-related and society-related, that is to say, it combines the evidence of subjectivity with the consensus of the research community (the scientific community). Validation verifies the conformity of the unconscious transference meaning with the analyst's understanding. The deciding criterion is the patient's reaction to the interpretation. In terms of the theory of science, validation in the clinical process corresponds to experimental testing of truth in the sphere of inanimate nature. Four settings of validation can be distinguished: the analyst's self-supervision during the process of understanding, which goes from incomprehension to comprehension (container-contained, PS-->D, selected fact); the patient's reaction to the interpretation (insight) and the analyst's assessment of the reaction; supervision and second thoughts; and discussion in groups and publications leading to consensus. It is a peculiarity of psychoanalytic research that in the event of positive validation the three criteria of truth (evidence, consensus and utility) coincide.

  10. European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip.

    PubMed

    Weigl, Martin; Wild, Heike

    2017-09-15

    To validate the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 questionnaire. Content validity was evaluated by calculating prevalences of impairments in body function and structures, limitations in activities and participation and environmental factors, which were either barriers or facilitators. Convergent construct validity was evaluated by correlating the International Classification of Functioning, Disability and Health categories with the Short-Form 36 Physical Component Score and the SF-36 Mental Component Score in a subgroup of 259 patients. The prevalences of all body function, body structure and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score. Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. This study supports the validity of the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation: Comprehensive International Classification of Functioning, Disability and Health Core Sets were developed as practical tools for application in multidisciplinary assessments. The validity of the Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis in this study supports its application in European patients with osteoarthritis. The differences in results between this European validation study and a previous Singaporean validation study underscore the need to validate the International Classification of Functioning, Disability and Health Core Sets in different regions of the world.

  11. Evaluating the Predictive Validity of the Computerized Comprehension Task: Comprehension Predicts Production

    PubMed Central

    Friend, Margaret; Schmitt, Sara A.; Simpson, Adrianne M.

    2017-01-01

    Until recently, the challenges inherent in measuring comprehension have impeded our ability to predict the course of language acquisition. The present research reports on a longitudinal assessment of the convergent and predictive validity of the CDI: Words and Gestures and the Computerized Comprehension Task (CCT). The CDI: WG and the CCT evinced good convergent validity; however, the CCT better predicted subsequent parent reports of language production. Language sample data in the third year confirm this finding: the CCT accounted for 24% of the variance in unique word use. These studies provide evidence for the utility of a behavior-based approach to predicting the course of language acquisition into production. PMID:21928878

  12. Comprehensive model of a hermetic reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Yang, B.; Ziviani, D.; Groll, E. A.

    2017-08-01

    A comprehensive simulation model is presented to predict the performance of a hermetic reciprocating compressor and to reveal the underlying mechanisms when the compressor is running. The presented model is composed of sub-models simulating the in-cylinder compression process, piston ring/journal bearing frictional power loss, single phase induction motor and the overall compressor energy balance among different compressor components. The valve model, leakage through piston ring model and in-cylinder heat transfer model are also incorporated into the in-cylinder compression process model. A numerical algorithm solving the model is introduced. The predicted results of the compressor mass flow rate and input power consumption are compared to the published compressor map values. Future work will focus on detailed experimental validation of the model and parametric studies investigating the effects of structural parameters, including the stroke-to-bore ratio, on the compressor performance.

  13. Content Validation of the Comprehension of Written Grammar Assessment for Deaf and Hard of Hearing Students

    ERIC Educational Resources Information Center

    Cannon, Joanna E.; Hubley, Anita M.

    2014-01-01

    Content validation is a crucial, but often neglected, component of good test development. In the present study, content validity evidence was collected to determine the degree to which elements (e.g., grammatical structures, items, picture responses, administration, and scoring instructions) of the Comprehension of Written Grammar (CWG) test are…

  14. Mixing characterization of highly underexpanded fluid jets with real gas expansion

    NASA Astrophysics Data System (ADS)

    Förster, Felix J.; Baab, Steffen; Steinhausen, Christoph; Lamanna, Grazia; Ewart, Paul; Weigand, Bernhard

    2018-03-01

    We report a comprehensive speed of sound database for multi-component mixing of underexpanded fuel jets with real gas expansion. The paper presents several reference test cases with well-defined experimental conditions providing quantitative data for validation of computational simulations. Two injectant fluids, fundamentally different with respect to their critical properties, are brought to supercritical state and discharged into cold nitrogen at different pressures. The database features a wide range of nozzle pressure ratios covering the regimes that are generally classified as highly and extremely highly underexpanded jets. Further variation is introduced by investigating different injection temperatures. Measurements are obtained along the centerline at different axial positions. In addition, an adiabatic mixing model based on non-ideal thermodynamic mixture properties is used to extract mixture compositions from the experimental speed of sound data. The concentration data obtained are complemented by existing experimental data and represented by an empirical fit.

  15. Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors.

    PubMed

    Wang, Quan; Birod, Kerstin; Angioni, Carlo; Grösch, Sabine; Geppert, Tim; Schneider, Petra; Rupp, Matthias; Schneider, Gisbert

    2011-01-01

    Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. 12 compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
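
    The rotation-invariance idea behind such descriptors can be stated in a few lines: for each expansion degree l, the norm of the coefficients over all orders m is unchanged by rotation of the molecule. The sketch below uses random placeholder coefficients rather than real surface expansions.

        # Per-degree norms of spherical-harmonics coefficients as a rotation-invariant shape descriptor.
        import numpy as np

        rng = np.random.default_rng(4)
        L_max = 8
        coeffs = [rng.normal(size=2 * l + 1) + 1j * rng.normal(size=2 * l + 1)
                  for l in range(L_max + 1)]                       # placeholder c_{l,m} per degree l

        descriptor = np.array([np.linalg.norm(c) for c in coeffs])  # sqrt(sum_m |c_lm|^2) for each l
        print(descriptor)   # molecules can then be compared by, e.g., Euclidean distance between descriptors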

  16. Content validity of the Comprehensive ICF Core Set for multiple sclerosis from the perspective of speech and language therapists.

    PubMed

    Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela

    2014-11-01

    The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework to structure the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the International Classification of Functioning, Disability and Health (ICF) and to guide the treatment and rehabilitation process accordingly. It is now undergoing validation from the user perspective for which it has been developed in the first place. To validate the content of the Comprehensive ICF Core Set for MS from the perspective of speech and language therapists (SLTs) involved in the treatment of persons with MS (PwMS). Within a three-round e-mail-based Delphi Study 34 SLTs were asked about PwMS' problems, resources and aspects of the environment treated by SLTs. Responses were linked to ICF categories. Identified ICF categories were compared with those included in the Comprehensive ICF Core Set for MS to examine its content validity. Thirty-four SLTs named 524 problems and resources, as well as aspects of environment. Statements were linked to 129 ICF categories (60 Body-functions categories, two Body-structures categories, 42 Activities-&-participation categories, and 25 Environmental-factors categories). SLTs confirmed 46 categories in the Comprehensive ICF Core Set. Twenty-one ICF categories were identified as not-yet-included categories. This study contributes to the content validity of the Comprehensive ICF Core Set for MS from the perspective of SLTs. Study participants agreed on a few not-yet-included categories that should be further discussed for inclusion in a revised version of the Comprehensive ICF Core Set to strengthen SLTs' perspective in PwMS' neurorehabilitation. © 2014 Royal College of Speech and Language Therapists.

  17. Comprehensive heat transfer correlation for water/ethylene glycol-based graphene (nitrogen-doped graphene) nanofluids derived by artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS)

    NASA Astrophysics Data System (ADS)

    Savari, Maryam; Moghaddam, Amin Hedayati; Amiri, Ahmad; Shanbedi, Mehdi; Ayub, Mohamad Nizam Bin

    2017-10-01

    Herein, an artificial neural network and an adaptive neuro-fuzzy inference system are employed to model the effects of important parameters on the heat transfer and fluid flow characteristics of a car radiator, and the predictions are compared with experimental results for the testing data. To this end, two novel nanofluids (water/ethylene glycol-based graphene and nitrogen-doped graphene nanofluids) were experimentally synthesized. The Nusselt number was then modeled with respect to variations in inlet temperature, Reynolds number, Prandtl number and concentration, which were defined as the input (design) variables. To obtain reliable results, the data were divided into training and testing sets. The networks were trained on the larger portion of the experimental data, while the remaining data, reserved for testing the appropriateness of the models, were fed into the trained models. The predicted results were then compared with the experimental data to evaluate validity. The high level of agreement confirmed that the proposed BPNN model with one hidden layer and five neurons is efficient, and the procedure can be extended to other water/ethylene glycol-based carbon nanostructure nanofluids. Finally, the data set was expanded using the model, yielding a fundamental correlation for calculating the Nusselt number of water/ethylene glycol-based nanofluids containing graphene or nitrogen-doped graphene.
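
    The reported network topology (one hidden layer with five neurons; inputs: inlet temperature, Reynolds number, Prandtl number, concentration; output: Nusselt number) could be reproduced along the lines below; the training data here are synthetic placeholders, not the experimental measurements.

        # Sketch of a one-hidden-layer, five-neuron regression network for Nusselt number (synthetic data).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(5)
        X = np.column_stack([
            rng.uniform(45, 65, 200),      # inlet temperature [C] (assumed range)
            rng.uniform(200, 1000, 200),   # Reynolds number (assumed range)
            rng.uniform(20, 60, 200),      # Prandtl number (assumed range)
            rng.uniform(0.01, 0.2, 200),   # concentration [wt%] (assumed range)
        ])
        Nu = 0.02 * X[:, 1] ** 0.8 * X[:, 2] ** 0.3 * (1 + 5 * X[:, 3]) + rng.normal(0, 1, 200)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0))
        model.fit(X, Nu)
        print(model.predict(X[:3]))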

  18. Anesthetics and analgesics in experimental traumatic brain injury: Selection based on experimental objectives

    PubMed Central

    Rowe, Rachel K.; Harrison, Jordan L.; Thomas, Theresa C.; Pauly, James R.; Adelson, P. David; Lifshitz, Jonathan

    2013-01-01

    The use of animal modeling in traumatic brain injury (TBI) research is justified by the lack of sufficiently comprehensive in vitro and computer modeling that incorporates all components of the neurovascular unit. Valid animal modeling of TBI requires accurate replication of both the mechanical forces and secondary injury conditions observed in human patients. Regulatory requirements for animal modeling emphasize the administration of appropriate anesthetics and analgesics unless withholding these drugs is scientifically justified. The objective of this review is to present scientific justification for standardizing the use of anesthetics and analgesics, within a study, when modeling TBI in order to preserve study validity. Evidence for the interference of anesthetics and analgesics in the natural course of brain injury calls for consistent consideration of pain management regimens when conducting TBI research. Anesthetics administered at the time of or shortly after induction of brain injury can alter cognitive, motor, and histological outcomes following TBI. A consistent anesthesia protocol based on experimental objectives within each individual study is imperative when conducting TBI studies to control for the confounding effects of anesthesia on outcome parameters. Experimental studies that replicate the clinical condition are essential to gain further understanding and evaluate possible treatments for TBI. However, with animal models of TBI it is essential that investigators assure a uniform drug delivery protocol that minimizes confounding variables, while minimizing pain and suffering. PMID:23877609

  19. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
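
    For context, the standard maximum-likelihood cue-combination rule that such models build on weights each modality by its reliability. The snippet below shows only that generic rule; it is not the paper's specific early-MLE formulation, which applies fusion to a continuous internal representation before categorization.

        # Generic MLE fusion of two cues, each weighted by its inverse variance (reliability).
        def mle_fuse(x_a, var_a, x_v, var_v):
            """Fuse auditory and visual estimates; returns fused estimate and its variance."""
            w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
            w_v = 1 - w_a
            x_av = w_a * x_a + w_v * x_v              # reliability-weighted average
            var_av = 1 / (1 / var_a + 1 / var_v)      # fused variance is reduced
            return x_av, var_av

        print(mle_fuse(x_a=0.2, var_a=1.0, x_v=0.8, var_v=0.25))  # the more reliable cue dominates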

  20. Validated MicroRNA Target Databases: An Evaluation.

    PubMed

    Lee, Yun Ji Diana; Kim, Veronica; Muth, Dillon C; Witwer, Kenneth W

    2015-11-01

    Positive findings from preclinical and clinical studies involving depletion or supplementation of microRNA (miRNA) engender optimism about miRNA-based therapeutics. However, off-target effects must be considered. Predicting these effects is complicated. Each miRNA may target many gene transcripts, and the rules governing imperfectly complementary miRNA:target interactions are incompletely understood. Several databases provide lists of the relatively small number of experimentally confirmed miRNA:target pairs. Although incomplete, this information might allow assessment of at least some of the off-target effects. We evaluated the performance of four databases of experimentally validated miRNA:target interactions (miRWalk 2.0, miRTarBase, miRecords, and TarBase 7.0) using a list of 50 alphabetically consecutive genes. We examined the provided citations to determine the degree to which each interaction was experimentally supported. To assess stability, we tested at the beginning and end of a five-month period. Results varied widely by database. Two of the databases changed significantly over the course of 5 months. Most reported evidence for miRNA:target interactions was indirect or otherwise weak, and relatively few interactions were supported by more than one publication. Some returned results appear to arise from simplistic text searches that offer no insight into the relationship of the search terms, may not even include the reported gene or miRNA, and may thus be invalid. We conclude that validation databases provide important information, but not all information in all extant databases is up-to-date or accurate. Nevertheless, the more comprehensive validation databases may provide useful starting points for investigation of off-target effects of proposed small RNA therapies. © 2015 Wiley Periodicals, Inc.

  1. Identification of immunoglobulins using Chou's pseudo amino acid composition with feature selection technique.

    PubMed

    Tang, Hua; Chen, Wei; Lin, Hao

    2016-04-01

    Immunoglobulins, also called antibodies, are a group of cell surface proteins which are produced by the immune system in response to the presence of a foreign substance (called an antigen). They play key roles in many medical, diagnostic and biotechnological applications. Correct identification of immunoglobulins is crucial to the comprehension of humoral immune function. With the avalanche of protein sequences identified in the postgenomic age, it is highly desirable to develop computational methods to identify immunoglobulins in a timely manner. In view of this, we designed a predictor called "IGPred" by formulating protein sequences with the pseudo amino acid composition, into which nine physicochemical properties of amino acids were incorporated. Jackknife cross-validated results showed that 96.3% of immunoglobulins and 97.5% of non-immunoglobulins can be correctly predicted, indicating that IGPred holds very high potential to become a useful tool for antibody analysis. For the convenience of most experimental scientists, a web-server for IGPred was established at http://lin.uestc.edu.cn/server/IGPred. We believe that the web-server will become a powerful tool to study immunoglobulins and to guide related experimental validations.
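
    As an aside, the jackknife evaluation reported above amounts to leave-one-out cross-validation. The Python sketch below shows that procedure on a generic binary classification problem; the feature matrix merely stands in for pseudo amino acid composition vectors and is not the IGPred feature set.

        # Leave-one-out (jackknife) evaluation sketch on synthetic data, assuming scikit-learn.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.svm import SVC

        # Placeholder features standing in for pseudo amino acid composition vectors.
        X, y = make_classification(n_samples=120, n_features=29, random_state=0)
        accuracies = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneOut())
        print("jackknife accuracy:", accuracies.mean())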

  2. TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redler, G; Cifter, G; Templeton, A

    2016-06-15

    Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data are converted to an MCNP input geometry accounting for different tissue compositions and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensities in simulated and experimental scatter images of tissue equivalent objects (water, lung, bone) match within the uncertainty (∼3%). Lung tumor phantom images agree as well; specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter imaging of lung tumors can provide images in as little as 0.5 seconds with a CNR of ∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated patient images demonstrate the clinical utility of scatter imaging for real-time tumor tracking during lung SBRT.
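
    For reference, a contrast-to-noise ratio of the kind quoted above (CNR ∼ 2.7 for a 0.5 s scatter image) is typically computed from region-of-interest statistics, as in the Python sketch below; the image values here are synthetic placeholders.

        # Contrast-to-noise ratio (CNR) from tumor and background ROI statistics.
        import numpy as np

        rng = np.random.default_rng(2)
        lung = rng.normal(100.0, 12.0, size=(64, 64))   # background (lung) ROI, synthetic
        tumor = rng.normal(135.0, 12.0, size=(16, 16))  # tumor ROI with higher mean signal

        cnr = abs(tumor.mean() - lung.mean()) / lung.std()
        print(f"CNR = {cnr:.1f}")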

  3. The Validity of Individual Rorschach Variables: Systematic Reviews and Meta-Analyses of the Comprehensive System

    ERIC Educational Resources Information Center

    Mihura, Joni L.; Meyer, Gregory J.; Dumitrascu, Nicolae; Bombel, George

    2013-01-01

    We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = 0.27 (k = 770) as compared to r = 0.08 (k = 386)…

  4. Experimental verification of a thermal equivalent circuit dynamic model on an extended range electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn

    2017-03-01

    The development of a dynamic thermal battery model for hybrid and electric vehicles is realized. A thermal equivalent circuit model is created which aims to capture and understand the heat propagation from the cells through the entire pack and to the environment, using a production vehicle battery pack for model validation. The model incorporates the production hardware and the liquid battery thermal management system components, using their physical and geometric properties to calculate the thermal resistances of the components (conduction, convection and radiation) along with their associated heat capacities. Various heat sources and sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates the proposed method allows for a comprehensive real-time battery pack analysis at little computational expense when compared to other types of computer-based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated on a parked vehicle with varying quiescent cooling rates to examine the effect on diurnal battery temperature during longer-term static exposure. A typical daily driving schedule is also simulated and examined.
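
    The thermal-equivalent-circuit idea can be sketched with a minimal lumped model: a cell node with a heat capacity connected through thermal resistances to coolant and ambient nodes. The Python example below is an illustrative two-node version with made-up parameter values, not the production pack model described above.

        # Minimal lumped thermal equivalent circuit: one cell node, two resistive paths.
        import numpy as np
        from scipy.integrate import solve_ivp

        C_cell = 900.0              # J/K, lumped cell heat capacity (illustrative)
        R_cool = 2.0                # K/W, cell-to-coolant thermal resistance
        R_amb = 8.0                 # K/W, cell-to-ambient thermal resistance
        T_cool, T_amb = 25.0, 40.0  # deg C, coolant and ambient temperatures
        Q_gen = 5.0                 # W, internal heat generation (e.g. resistive losses)

        def dTdt(t, T):
            q_out = (T[0] - T_cool) / R_cool + (T[0] - T_amb) / R_amb
            return [(Q_gen - q_out) / C_cell]

        sol = solve_ivp(dTdt, (0, 3600), [30.0], max_step=10.0)
        print(f"cell temperature after 1 h: {sol.y[0, -1]:.1f} deg C")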

  5. APID interactomes: providing proteome-based interactomes with controlled quality for multiple species and derived networks

    PubMed Central

    Alonso-López, Diego; Gutiérrez, Miguel A.; Lopes, Katia P.; Prieto, Carlos; Santamaría, Rodrigo; De Las Rivas, Javier

    2016-01-01

    APID (Agile Protein Interactomes DataServer) is an interactive web server that provides unified generation and delivery of protein interactomes mapped to their respective proteomes. This resource is a new, fully redesigned server that includes a comprehensive collection of protein interactomes for more than 400 organisms (25 of which include more than 500 interactions) produced by the integration of only experimentally validated protein–protein physical interactions. For each protein–protein interaction (PPI) the server includes currently reported information about its experimental validation to allow selection and filtering at different quality levels. As a whole, it provides easy access to the interactomes from specific species and includes a global uniform compendium of 90,379 distinct proteins and 678,441 singular interactions. APID integrates and unifies PPIs from major primary databases of molecular interactions, from other specific repositories and also from experimentally resolved 3D structures of protein complexes where more than two proteins were identified. For this purpose, a collection of 8,388 structures was analyzed to identify specific PPIs. APID also includes a new graph tool (based on Cytoscape.js) for visualization and interactive analyses of PPI networks. The server does not require registration and it is freely available for use at http://apid.dep.usal.es. PMID:27131791

  6. Examining the validity of self-reports on scales measuring students' strategic processing.

    PubMed

    Samuelstuen, Marit S; Bråten, Ivar

    2007-06-01

    Self-report inventories trying to measure strategic processing at a global level have been much used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. The primary aim was to examine the criterion-related and construct validity of the global strategy data obtained with the Cross-Curricular Competencies (CCC) scale. Additionally, we wanted to compare the validity of these data with the validity of data obtained with a task-specific self-report inventory focusing on the same types of strategies. The sample included 269 10th-grade students from 12 different junior high schools. Global strategy use as assessed with the CCC was compared with task-specific strategy use reported in three different reading situations. Moreover, relationships between scores on the CCC and scores on measures of text comprehension were examined and compared with relationships between scores on the task-specific strategy measure and the same comprehension measures. The comparison between the CCC strategy scores and the task-specific strategy scores suggested only modest criterion-related validity for the data obtained with the global strategy inventory. The CCC strategy scores were also not related to the text comprehension measures, indicating poor construct validity. In contrast, the task-specific strategy scores were positively related to the comprehension measures, indicating good construct validity. Attempts to measure strategic processing at a global level seem to have limited validity and utility.

  7. Bio-Optical Measurement and Modeling of the California Current and Polar Oceans

    NASA Technical Reports Server (NTRS)

    Mitchell, B. Greg; Fargion, Giulietta S. (Technical Monitor)

    2001-01-01

    The principal goals of our research are to validate standard or experimental products through detailed bio-optical and biogeochemical measurements, and to combine ocean optical observations with advanced radiative transfer modeling to contribute to satellite vicarious radiometric calibration and advanced algorithm development. Achieving our goals requires continued efforts to execute complex field programs globally, as well as development of advanced ocean optical measurement protocols. We completed a comprehensive set of ocean optical observations in the California Current, Southern Ocean and Indian Ocean, requiring a large commitment to instrument calibration, measurement protocols, data processing and data merger. We augmented separately funded projects of our own, as well as those of others, to acquire the in situ data sets we have collected on various global cruises supported by separate grants or contracts. In collaboration with major oceanographic ship-based observation programs funded by various agencies (CalCOFI, US JGOFS, NOAA AMLR, INDOEX and Japan/East Sea) our SIMBIOS effort has resulted in data from diverse bio-optical provinces. For these global deployments we generate a high-quality, methodologically consistent data set encompassing a wide range of oceanic conditions. Global data collected in recent years have been integrated with our on-going CalCOFI database and have been used to evaluate SeaWiFS algorithms and to carry out validation studies. The combined database we have assembled now comprises more than 700 stations and includes observations for the clearest oligotrophic waters, highly eutrophic blooms, red tides and coastal case 2 conditions. The data have been used to validate water-leaving radiance estimated with SeaWiFS as well as bio-optical algorithms for chlorophyll pigments. The comprehensive data set is utilized for development of experimental algorithms (e.g. high-low latitude pigment transition, phytoplankton absorption, and cDOM). During this period we completed 9 peer-reviewed publications in high quality journals, and presented aspects of our work at more than 10 scientific conferences.

  8. Quality evaluation of health information system's architectures developed using the HIS-DF methodology.

    PubMed

    López, Diego M; Blobel, Bernd; Gonzalez, Carolina

    2010-01-01

    Requirement analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, the paper demonstrates that, using HIS-DF and HL7 information models, the semantic quality of an HIS architecture can be improved compared to architectures developed using the traditional RUP process. Semantic quality of the architecture has been measured in terms of the model's completeness and validity metrics. The experimental results demonstrated an increase in completeness of 14.38% and an increase in validity of 16.63% when using HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development suggests an increased quality of the final HIS, with an expected indirect impact on patient care.

  9. Comprehension of Written Grammar Test: Reliability and Known-Groups Validity Study with Hearing and Deaf and Hard-of-Hearing Students

    ERIC Educational Resources Information Center

    Cannon, Joanna E.; Hubley, Anita M.; Millhoff, Courtney; Mazlouman, Shahla

    2016-01-01

    The aim of the current study was to gather validation evidence for the "Comprehension of Written Grammar" (CWG; Easterbrooks, 2010) receptive test of 26 grammatical structures of English print for use with children who are deaf and hard of hearing (DHH). Reliability and validity data were collected for 98 participants (49 DHH and 49…

  10. Theory and experiments in model-based space system anomaly management

    NASA Astrophysics Data System (ADS)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms was developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.
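
    The model-based detection step underlying such a suite can be reduced, in its simplest form, to a residual check: compare telemetry with a model prediction and flag samples whose residual exceeds a threshold. The Python sketch below is a generic illustration of this idea and is unrelated to the Sapphire telemetry or the algorithms of the thesis.

        # Generic model-based fault detection by residual thresholding (illustrative values).
        import numpy as np

        def detect_anomalies(measured, predicted, threshold):
            residual = np.abs(measured - predicted)
            return np.flatnonzero(residual > threshold)   # indices of flagged samples

        predicted_bus_voltage = np.full(10, 28.0)          # model prediction [V]
        measured_bus_voltage = 28.0 + np.array([0.1, -0.2, 0.0, 0.1, -2.5, 0.2, 0.0, -0.1, 0.1, 0.3])
        print("anomalous samples:", detect_anomalies(measured_bus_voltage, predicted_bus_voltage, 1.0))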

  11. Comprehensive Quantitative Analysis on Privacy Leak Behavior

    PubMed Central

    Fan, Lejun; Wang, Yuanzhuo; Jin, Xiaolong; Li, Jingyuan; Cheng, Xueqi; Jin, Shuyuan

    2013-01-01

    Privacy information is prone to be leaked by illegal software providers with various motivations. Privacy leak behavior has thus become an important research issue of cyber security. However, existing approaches can only qualitatively analyze privacy leak behavior of software applications. No quantitative approach, to the best of our knowledge, has been developed in the open literature. To fill this gap, in this paper we propose for the first time four quantitative metrics, namely, possibility, severity, crypticity, and manipulability, for privacy leak behavior analysis based on Privacy Petri Net (PPN). In order to compare the privacy leak behavior among different software, we further propose a comprehensive metric, namely, overall leak degree, based on these four metrics. Finally, we validate the effectiveness of the proposed approach using real-world software applications. The experimental results demonstrate that our approach can quantitatively analyze the privacy leak behaviors of various software types and reveal their characteristics from different aspects. PMID:24066046

  12. Comprehensive quantitative analysis on privacy leak behavior.

    PubMed

    Fan, Lejun; Wang, Yuanzhuo; Jin, Xiaolong; Li, Jingyuan; Cheng, Xueqi; Jin, Shuyuan

    2013-01-01

    Privacy information is prone to be leaked by illegal software providers with various motivations. Privacy leak behavior has thus become an important research issue of cyber security. However, existing approaches can only qualitatively analyze privacy leak behavior of software applications. No quantitative approach, to the best of our knowledge, has been developed in the open literature. To fill this gap, in this paper we propose for the first time four quantitative metrics, namely, possibility, severity, crypticity, and manipulability, for privacy leak behavior analysis based on Privacy Petri Net (PPN). In order to compare the privacy leak behavior among different software, we further propose a comprehensive metric, namely, overall leak degree, based on these four metrics. Finally, we validate the effectiveness of the proposed approach using real-world software applications. The experimental results demonstrate that our approach can quantitatively analyze the privacy leak behaviors of various software types and reveal their characteristics from different aspects.

  13. New Parameterization of Neutron Absorption Cross Sections

    NASA Technical Reports Server (NTRS)

    Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.

    1997-01-01

    A recent parameterization of absorption cross sections for any system of charged-ion collisions, including proton-nucleus collisions, is extended to neutron-nucleus collisions, valid from approximately 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, the optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than in earlier published results.

  14. Testing Reading Comprehension of Theoretical Discourse with Cloze.

    ERIC Educational Resources Information Center

    Greene, Benjamin B., Jr.

    2001-01-01

    Presents evidence from a large sample of reading test scores for the validity of cloze-based assessments of reading comprehension for the discourse typically encountered in introductory college economics textbooks. Notes that results provide strong evidence that appropriately designed cloze tests permit valid assessments of reading comprehension…

  15. Viscosity and diffusivity in melts: from unary to multicomponent systems

    NASA Astrophysics Data System (ADS)

    Chen, Weimin; Zhang, Lijun; Du, Yong; Huang, Baiyun

    2014-05-01

    Viscosity and diffusivity, two important transport coefficients, are systematically investigated from unary to binary to multicomponent melts in the present work. By coupling Kaptay's viscosity equation for pure liquid metals with effective radii of the diffusing species, the Sutherland equation is modified to take the size effect into account and is further recast into an Arrhenius formula for convenient use. Its reliability for predicting self-diffusivity and impurity diffusivity in unary liquids is then validated by comparing the calculated self-diffusivities and impurity diffusivities in liquid Al- and Fe-based alloys with the experimental and assessed data. Moreover, the Kozlov model was chosen from among various viscosity models as the most reliable one for reproducing the experimental viscosities in binary and multicomponent melts. Based on the reliable viscosities calculated from the Kozlov model, the modified Sutherland equation is utilized to predict the tracer diffusivities in binary and multicomponent melts, and is validated in Al-Cu, Al-Ni and Al-Ce-Ni melts. Comprehensive comparisons between the calculated results and the literature data indicate that the experimental and theoretical tracer diffusivities can be well reproduced by the present calculations. In addition, the variation of the vacancy-wind factor in binary liquid Al-Ni alloys with increasing temperature is also discussed. Moreover, the calculated inter-diffusivities in liquid Al-Cu, Al-Ni and Al-Ag-Cu alloys are in excellent agreement with the measured and theoretical data. Comparisons between the simulated concentration profiles and the measured ones in Al-Cu, Al-Ce-Ni and Al-Ag-Cu melts are further used to validate the present calculation method.
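
    For orientation, the classical Sutherland–Einstein relation linking diffusivity to viscosity, and a generic Arrhenius form of the kind it is recast into, are written below; the size-corrected equation actually derived in the paper differs in its effective-radius treatment.

        % Sutherland--Einstein relation (perfect-slip limit) and a generic Arrhenius form;
        % the paper's modified equation incorporates effective radii of the diffusing species.
        \begin{align}
          D &= \frac{k_\mathrm{B} T}{4\pi\,\eta\,r_\mathrm{eff}}, &
          D &= D_0 \exp\!\left(-\frac{Q}{R T}\right).
        \end{align}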

  16. Does Validation during Language Comprehension Depend on an Evaluative Mindset?

    ERIC Educational Resources Information Center

    Isberner, Maj-Britt; Richter, Tobias

    2014-01-01

    Whether information is routinely and nonstrategically evaluated for truth during comprehension is still a point of contention. Previous studies supporting the assumption of nonstrategic validation have used a Stroop-like paradigm in which participants provided yes/no judgments in tasks unrelated to the truth or plausibility of the experimental…

  17. Measuring Speech Comprehensibility in Students with Down Syndrome

    PubMed Central

    Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility. Results Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989

  18. Teaching High School Biology Students to Coordinate Text and Diagrams: Relations with Transfer, Effort, and Spatial Skill

    NASA Astrophysics Data System (ADS)

    Bergey, Bradley W.; Cromley, Jennifer G.; Newcombe, Nora S.

    2015-10-01

    There is growing evidence that targeted instruction can improve diagram comprehension, yet one of the skills identified in the diagram comprehension literature-coordinating multiple representations-has rarely been directly taught to students and tested as a classroom intervention. We created a Coordinating Multiple Representation (CMR) intervention that was an addition to an intervention focused on Conventions of Diagrams (COD) and tested their joint effects on diagram comprehension for near transfer (uninstructed biology diagrams), far transfer (uninstructed geology diagrams), and content learning (biology knowledge). The comparison group received instruction using a previously validated intervention that focused exclusively on COD. Participants were 9th-10th grade biology students (N = 158 from two schools), whose classes were randomly assigned to COD alone or COD + CMR conditions and studied with a pretest-posttest experimental design. Both groups showed significant growth in biology knowledge (d = .30-.53, for COD and COD + CMR, respectively) and biology diagram comprehension (d = .28-.57). Neither group showed far transfer. Analyses of student work products during the interventions suggest that gains were not simply due to the passage of time, because student effort was correlated with gains in both treatment groups. Directions for improving future CMR interventions are discussed.

  19. Comparing the Hydrologic and Watershed Processes between a Full Scale Stochastic Model Versus a Scaled Physical Model of Bell Canyon

    NASA Astrophysics Data System (ADS)

    Hernandez, K. F.; Shah-Fairbank, S.

    2016-12-01

    The San Dimas Experimental Forest has been designated as a research area by the United States Forest Service for use as a hydrologic testing facility since 1933 to investigate the watershed hydrology of the 27-square-mile area. Incorporation of a computer model provides validation for the testing of the physical model. This study focuses on the San Dimas Experimental Forest's Bell Canyon, one of the triad of watersheds contained within the Big Dalton watershed of the San Dimas Experimental Forest. A scaled physical model of Bell Canyon was constructed to highlight watershed characteristics and their individual effects on runoff. The physical model offers a comprehensive visualization of a natural watershed and can vary rainfall intensity, slope, and roughness through interchangeable parts and adjustments to the system. The scaled physical model is validated and calibrated against a HEC-HMS model to assure similitude of the system. Preliminary results of the physical model suggest that a 50-year storm event can be represented by a peak discharge of 2.2 × 10^-3 cfs. When comparing the results to HEC-HMS, this equates to a flow relationship of approximately 1:160,000, which can be used to model other return periods. The completed Bell Canyon physical model can be used for educational instruction in the classroom, outreach in the community, and further research as an accurate representation of the watershed present in the San Dimas Experimental Forest.
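
    Applying the reported model-to-prototype discharge ratio of roughly 1:160,000 to the measured model peak gives an indicative prototype-scale discharge (illustrative arithmetic only, not a result quoted in the record):

        \begin{equation}
          Q_\mathrm{prototype} \approx 1.6\times10^{5} \times 2.2\times10^{-3}\ \mathrm{cfs}
          \approx 3.5\times10^{2}\ \mathrm{cfs}.
        \end{equation}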

  20. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuniga, Cristal; Li, Chien -Ting; Huelsman, Tyler

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.
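
    Phenotype predictions from a genome-scale reconstruction such as iCZ843 are typically obtained by flux balance analysis. The Python sketch below uses COBRApy to illustrate this workflow; the SBML file name and the exchange-reaction identifier are hypothetical placeholders, and whether the published model is distributed in this form is an assumption here.

        # Flux balance analysis sketch with COBRApy; file name and reaction ID are hypothetical.
        import cobra

        model = cobra.io.read_sbml_model("iCZ843.xml")   # hypothetical path to the reconstruction

        baseline = model.optimize().objective_value      # predicted growth rate in the base medium
        medium = model.medium
        medium["EX_trp__L_e"] = 1.0                      # assumed tryptophan exchange reaction ID
        model.medium = medium
        with_trp = model.optimize().objective_value      # growth rate after the medium change

        print(f"predicted growth rate: {baseline:.3f} -> {with_trp:.3f} 1/h")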

  1. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions

    DOE PAGES

    Zuniga, Cristal; Li, Chien -Ting; Huelsman, Tyler; ...

    2016-07-02

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.

  2. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    PubMed

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.

  3. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions

    PubMed Central

    Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten

    2016-01-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244

  4. Numerical modelling in friction lap joining of aluminium alloy and carbon-fiber-reinforced-plastic sheets

    NASA Astrophysics Data System (ADS)

    Das, A.; Bang, H. S.; Bang, H. S.

    2018-05-01

    Multi-material combinations of aluminium alloy and carbon-fiber-reinforced plastics (CFRP) have gained attention in the automotive and aerospace industries to enhance fuel efficiency and the strength-to-weight ratio of components. Various limitations of laser beam welding, adhesive bonding and mechanical fasteners make these processes inefficient for joining metal and CFRP sheets. Friction lap joining is an alternative choice for this purpose. Comprehensive studies of friction lap joining of aluminium to CFRP sheets are essential and scarce in the literature. The present work reports a combined theoretical and experimental study of the joining of AA5052 and CFRP sheets using the friction lap joining process. A three-dimensional finite-element-based heat transfer model is developed to compute the temperature fields and thermal cycles. The computed results are validated extensively against the corresponding experimentally measured results.

  5. Integration and Validation of the Genome-Scale Metabolic Models of Pichia pastoris: A Comprehensive Update of Protein Glycosylation Pathways, Lipid and Energy Metabolism

    PubMed Central

    Tomàs-Gamisans, Màrius; Ferrer, Pau; Albiol, Joan

    2016-01-01

    Motivation: Genome-scale metabolic models (GEMs) are tools that allow predicting a phenotype from a genotype under certain environmental conditions. GEMs have been developed in the last ten years for a broad range of organisms, and are used for multiple purposes such as discovering new properties of metabolic networks, predicting new targets for metabolic engineering, as well as optimizing the cultivation conditions for biochemicals or recombinant protein production. Pichia pastoris is one of the most widely used organisms for heterologous protein expression. There are different GEMs for this methylotrophic yeast, of which the most relevant and complete in the published literature are iPP668, PpaMBEL1254 and iLC915. However, these three models differ regarding certain pathways, terminology for metabolites and reactions, and annotations. Moreover, GEMs for some species are typically built based on the reconstructed models of related model organisms. In these cases, some organism-specific pathways could be missing or misrepresented. Results: In order to provide an updated and more comprehensive GEM for P. pastoris, we have reconstructed and validated a consensus model integrating and merging all three existing models. In this step a comprehensive review and integration of the metabolic pathways included in each of these three versions was performed. In addition, the resulting iMT1026 model includes a new description of some metabolic processes. In particular, new information described in recently published literature is included, mainly related to fatty acid and sphingolipid metabolism, glycosylation and cell energetics. Finally, the reconstructed model was tested and validated by comparing the simulation results with available empirical physiological data obtained from a wide range of experimental conditions, such as different carbon sources, distinct oxygen availability conditions, and the production of two different recombinant proteins. In these simulations, the iMT1026 model showed better performance than the previously existing models. PMID:26812499

  6. Integration and Validation of the Genome-Scale Metabolic Models of Pichia pastoris: A Comprehensive Update of Protein Glycosylation Pathways, Lipid and Energy Metabolism.

    PubMed

    Tomàs-Gamisans, Màrius; Ferrer, Pau; Albiol, Joan

    2016-01-01

    Genome-scale metabolic models (GEMs) are tools that allow predicting a phenotype from a genotype under certain environmental conditions. GEMs have been developed in the last ten years for a broad range of organisms, and are used for multiple purposes such as discovering new properties of metabolic networks, predicting new targets for metabolic engineering, as well as optimizing the cultivation conditions for biochemicals or recombinant protein production. Pichia pastoris is one of the most widely used organisms for heterologous protein expression. There are different GEMs for this methylotrophic yeast, of which the most relevant and complete in the published literature are iPP668, PpaMBEL1254 and iLC915. However, these three models differ regarding certain pathways, terminology for metabolites and reactions, and annotations. Moreover, GEMs for some species are typically built based on the reconstructed models of related model organisms. In these cases, some organism-specific pathways could be missing or misrepresented. In order to provide an updated and more comprehensive GEM for P. pastoris, we have reconstructed and validated a consensus model integrating and merging all three existing models. In this step a comprehensive review and integration of the metabolic pathways included in each of these three versions was performed. In addition, the resulting iMT1026 model includes a new description of some metabolic processes. In particular, new information described in recently published literature is included, mainly related to fatty acid and sphingolipid metabolism, glycosylation and cell energetics. Finally, the reconstructed model was tested and validated by comparing the simulation results with available empirical physiological data obtained from a wide range of experimental conditions, such as different carbon sources, distinct oxygen availability conditions, and the production of two different recombinant proteins. In these simulations, the iMT1026 model showed better performance than the previously existing models.

  7. The CPT Reading Comprehension Test: A Validity Study.

    ERIC Educational Resources Information Center

    Napoli, Anthony R.; Raymond, Lanette A.; Coffey, Cheryl A.; Bosco, Diane M.

    1998-01-01

    Describes a study done at Suffolk County Community College (New York) that assessed the validity of the College Board's Computerized Placement Test in Reading Comprehension (CPT-R) by comparing test results of 1,154 freshmen with the results of the Degree of Power Reading Test. Results confirmed the CPT-R's reliability in identifying basic…

  8. Developing and Validating Proof Comprehension Tests in Undergraduate Mathematics

    ERIC Educational Resources Information Center

    Mejía-Ramos, Juan Pablo; Lew, Kristen; de la Torre, Jimmy; Weber, Keith

    2017-01-01

    In this article, we describe and illustrate the process by which we developed and validated short, multiple-choice, reliable tests to assess undergraduate students' comprehension of three mathematical proofs. We discuss the purpose for each stage and how it benefited the design of our instruments. We also suggest ways in which this process could…

  9. Network news: prime time for systems biology of the plant circadian clock.

    PubMed

    McClung, C Robertson; Gutiérrez, Rodrigo A

    2010-12-01

    Whole-transcriptome analyses have established that the plant circadian clock regulates virtually every plant biological process and most prominently hormonal and stress response pathways. Systems biology efforts have successfully modeled the plant central clock machinery and an iterative process of model refinement and experimental validation has contributed significantly to the current view of the central clock machinery. The challenge now is to connect this central clock to the output pathways for understanding how the plant circadian clock contributes to plant growth and fitness in a changing environment. Undoubtedly, systems approaches will be needed to integrate and model the vastly increased volume of experimental data in order to extract meaningful biological information. Thus, we have entered an era of systems modeling, experimental testing, and refinement. This approach, coupled with advances from the genetic and biochemical analyses of clock function, is accelerating our progress towards a comprehensive understanding of the plant circadian clock network. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2009-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  11. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2010-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  12. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  13. Simulation and experimental research of 1MWe solar tower power plant in China

    NASA Astrophysics Data System (ADS)

    Yu, Qiang; Wang, Zhifeng; Xu, Ershu

    2016-05-01

    The establishment of a reliable simulation system for a solar tower power plant can greatly increase the economic and safety performance of the whole system. In this paper, a dynamic model of the 1MWe Solar Tower Power Plant at Badaling in Beijing is developed based on the "STAR-90" simulation platform, including the heliostat field, the central receiver system (water/steam), etc. The dynamic behavior of the global CSP plant can be simulated. In order to verify the validity of the simulation system, a complete experimental process was synchronously simulated by repeating the same operating steps on the simulation platform, including the locations and number of heliostats, the mass flow of the feed water, etc. From the simulation and experimental results, several important parameters are selected for detailed comparison. The results show that there is good agreement between the simulations and the experimental results and that the error range is acceptable considering the uncertainty of the models. Finally, a comprehensive analysis of the error sources is carried out based on the comparative results.

  14. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  15. Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic

    2015-11-01

    The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The Fusion Plasma Synthetic Diagnostics Platform now has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD M3D-C1 code, and the electromagnetic hybrid NOVAK eigenmode code. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract #DEAC02-09CH11466.

  16. Comprehensive Validation of an Intermittency Transport Model for Transitional Low-Pressure Turbine Flows

    NASA Technical Reports Server (NTRS)

    Suzen, Y. B.; Huang, P. G.

    2005-01-01

    A transport equation for the intermittency factor is employed to predict transitional flows under the effects of pressure gradients, freestream turbulence intensities, Reynolds number variations, flow separation and reattachment, and unsteady wake-blade interactions representing diverse operating conditions encountered in low-pressure turbines. The intermittent behaviour of the transitional flows is taken into account and incorporated into the computations by weighting the eddy viscosity, μ_t, with the intermittency factor, γ. Turbulent quantities are predicted using Menter's two-equation turbulence model (SST). The onset location of transition is obtained from correlations based on boundary-layer momentum thickness, acceleration parameter, and turbulence intensity. The intermittency factor is obtained from a transport model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The intermittency transport model is tested and validated against several well-documented low-pressure turbine experiments ranging from flat-plate cases to unsteady wake-blade interaction experiments. Overall, good agreement between the experimental data and computational results is obtained, illustrating the predictive capabilities of the model and the current intermittency transport modelling approach for transitional flow simulations.
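
    The modification referred to above is commonly written as an intermittency-weighted eddy viscosity, sketched below; the exact blending used in the paper may differ in detail.

        % Intermittency-weighted eddy viscosity entering the mean momentum equations:
        % gamma = 0 recovers laminar flow, gamma = 1 the fully turbulent SST solution.
        \begin{equation}
          \mu_{t,\mathrm{eff}} = \gamma\,\mu_t, \qquad 0 \le \gamma \le 1 .
        \end{equation}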

  17. Development and validation of a comprehensive model for MAP of fruits based on enzyme kinetics theory and Arrhenius relation.

    PubMed

    Mangaraj, S; K Goswami, T; Mahajan, P V

    2015-07-01

    MAP is a dynamic system in which respiration of the packaged product and gas permeation through the packaging film take place simultaneously. The desired levels of O2 and CO2 in a package are achieved by matching the film permeation rates for O2 and CO2 with the respiration rate of the packaged product. A mathematical model for MAP of fresh fruits applying an enzyme-kinetics-based respiration equation coupled with an Arrhenius-type model was developed. The model was solved numerically using a MATLAB program. The model was used to determine the time to reach the equilibrium concentration inside the MA package and the O2 and CO2 concentrations at the equilibrium state. The developed model for prediction of equilibrium O2 and CO2 concentrations was validated using experimental data for MA packaging of apple, guava and litchi.
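
    A minimal version of such a package gas balance, with Michaelis-Menten respiration shifted by an Arrhenius temperature factor and balanced against film permeation, can be sketched as below. All parameter values are illustrative and are not the fitted constants for apple, guava or litchi.

        # Minimal MAP O2 balance sketch: Michaelis-Menten respiration with an Arrhenius
        # temperature shift versus film permeation. Parameter values are illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp

        R = 8.314            # J/(mol K)
        T = 283.15           # storage temperature [K]
        Vm = 40.0 * np.exp(-60000.0 / R * (1.0 / T - 1.0 / 293.15))  # ml O2/(kg h), shifted from 20 deg C
        Km = 5.0             # % O2, Michaelis constant
        W = 0.5              # kg fruit in the package
        V_free = 1.0         # L free volume in the package
        P_O2 = 0.08          # L/h, lumped film permeance for O2 (area x permeability / thickness)
        O2_out = 20.9        # % O2 in ambient air

        def dO2dt(t, y):
            o2 = y[0]
            respiration = Vm * o2 / (Km + o2) * W / 1000.0   # L O2/h consumed by the fruit
            permeation = P_O2 * (O2_out - o2) / 100.0        # L O2/h entering through the film
            return [(permeation - respiration) / V_free * 100.0]   # % O2 per hour

        sol = solve_ivp(dO2dt, (0.0, 200.0), [20.9], max_step=1.0)
        print(f"approximate equilibrium headspace O2: {sol.y[0, -1]:.1f} %")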

  18. The problem of fouling in submerged membrane bioreactors - Model validation and experimental evidence

    NASA Astrophysics Data System (ADS)

    Tsibranska, Irene; Vlaev, Serafim; Tylkowski, Bartosz

    2018-01-01

    Integrating biological treatment with membrane separation has found a broad range of applications and attracted industrial attention. Submerged membrane bioreactors (SMBRs), based on membrane modules immersed in the bioreactor or on side-stream modules connected in a recycle loop, have been employed in different biotechnological processes for the separation of thermally unstable products. Fouling is one of the most important challenges in integrated SMBRs. A number of works are devoted to fouling analysis and its treatment, especially exploring the opportunities for enhanced fouling control in SMBRs. The main goal of this review is to provide a comprehensive yet concise overview of modeling of fouling in SMBRs, with a focus on the problem of model validation, either by real system measurements at different scales or by analysis of the obtained theoretical results. The review is focused on the current state of research applying computational fluid dynamics (CFD) modeling techniques.

  19. Fluid–Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toma, Milan; Jensen, Morten Ø.; Einstein, Daniel R.

    2015-07-17

    Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in-vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with μCT. Experimental data from the in-vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high-speed leaflet dynamics, and force vectors from the in-vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements are important in validating and adjusting material parameters in computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  20. Fluid-Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure.

    PubMed

    Toma, Milan; Jensen, Morten Ø; Einstein, Daniel R; Yoganathan, Ajit P; Cochran, Richard P; Kunzelman, Karyn S

    2016-04-01

    Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with μCT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements enable validating and adjusting material parameters to improve the accuracy of computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  1. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

    Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the mis-operation of ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity problem of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faults to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI), which evaluates the influence of the current IAQ on passenger health, is then compared between the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk since it validates sensor faults more accurately than conventional methods do. Copyright © 2014 Elsevier B.V. All rights reserved.
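
    The detection step rests on a squared prediction error (SPE) statistic computed from a latent-variable decomposition of fault-free training data. A minimal sketch of that idea is shown below using plain PCA rather than the paper's dynamic ICA, with a simple empirical percentile in place of a formal control limit; the data are synthetic.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on fault-free training data; return the mean and loading matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components].T          # loadings: (n_features, n_components)

def spe(X, mu, P):
    """Squared prediction error of each sample with respect to the PCA subspace."""
    Xc = X - mu
    residual = Xc - Xc @ P @ P.T
    return np.sum(residual ** 2, axis=1)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))               # synthetic fault-free sensor records
mu, P = fit_pca(X_train, n_components=3)
limit = np.percentile(spe(X_train, mu, P), 99)    # empirical 99% control limit

X_test = rng.normal(size=(10, 6))
X_test[4, 2] += 8.0                               # injected bias fault on one sensor
print(spe(X_test, mu, P) > limit)                 # the faulty sample should be flagged
```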

  2. A Curriculum-Based Measure of Language Comprehension for Preschoolers: Reliability and Validity of the Assessment of Story Comprehension

    ERIC Educational Resources Information Center

    Spencer, Trina D.; Goldstein, Howard; Kelley, Elizabeth Spencer; Sherman, Amber; McCune, Luke

    2017-01-01

    Despite research demonstrating the importance of language comprehension to later reading abilities, curriculum-based measures to assess language comprehension abilities in preschoolers remain lacking. The Assessment of Story Comprehension (ASC) features brief, child-relevant stories and a series of literal and inferential questions with a focus on…

  3. A Curriculum-Based Measure of Language Comprehension for Preschoolers: Reliability and Validity of the Assessment of Story Comprehension

    ERIC Educational Resources Information Center

    Spencer, Trina D.; Goldstein, Howard; Kelley, Elizabeth Spencer; Sherman, Amber; McCune, Luke

    2017-01-01

    Despite research demonstrating the importance of language comprehension to later reading abilities, curriculum-based measures to assess language comprehension abilities in preschoolers remain lacking. The Assessment of Story Comprehension (ASC) features brief, child-relevant stories and a series of literal and inferential questions with a focus on…

  4. University Undergraduate Science Students' Validation and Comprehension of Written Proof in the Context of Infinite Series

    ERIC Educational Resources Information Center

    Moru, Eunice Kolitsoe; Nchejane, John; Ramollo, Motlatsi; Rammea, Lisema

    2017-01-01

    The reported study explored undergraduate science students' validation and comprehension of written proofs, reasons given either to accept or reject mathematical procedures employed in the proofs, and the difficulties students encountered in reading the proofs. The proofs were constructed using both the Comparison and the Integral tests in the…

  5. Validity Evidence for the Test of Silent Reading Efficiency and Comprehension (TOSREC)

    ERIC Educational Resources Information Center

    Johnson, Evelyn S.; Pool, Juli L.; Carter, Deborah R.

    2011-01-01

    An essential component of a response to intervention (RTI) framework is a screening process that is both accurate and efficient. The purpose of this study was to analyze the validity evidence for the "Test of Silent Reading Efficiency and Comprehension" (TOSREC) to determine its potential for use within a screening process. Participants included…

  6. Confirmatory Factor Analysis of the TerraNova Comprehensive Tests of Basic Skills/5

    ERIC Educational Resources Information Center

    Stevens, Joseph J.; Zvoch, Keith

    2007-01-01

    Confirmatory factor analysis was used to explore the internal validity of scores on the TerraNova Comprehensive Tests of Basic Skills/5 using samples from a southwestern school district and standardization samples reported by the publisher. One of the strengths claimed for battery-type achievement tests is provision of reliable and valid samples…

  7. Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation

    ERIC Educational Resources Information Center

    Richter, Tobias; Maier, Johanna

    2017-01-01

    In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…

  8. Harmonizing the MSSM with the Galactic Center excess

    NASA Astrophysics Data System (ADS)

    Butter, Anja; Murgia, Simona; Plehn, Tilman; Tait, Tim M. P.

    2017-08-01

    The minimal supersymmetric setup offers a comprehensive framework to interpret the Fermi-LAT Galactic Center excess. Taking into account experimental, theoretical, and astrophysical uncertainties we can identify valid parameter regions linked to different annihilation channels. They extend to dark matter masses above 250 GeV. There exists a very mild tension between the observed relic density and the annihilation rate in the center of our Galaxy for specific channels. The strongest additional constraints come from the new generation of direct detection experiments, ruling out much of the light and intermediate dark matter mass regime and giving preference to heavier dark matter annihilating into a pair of top quarks.

  9. Failure analysis of energy storage spring in automobile composite brake chamber

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Wei, Qing; Hu, Xiaofeng

    2015-02-01

    This paper takes the energy storage spring of the parking brake cavity, a part of the automobile composite brake chamber, as its research object. A fault tree model for parking brake failure caused by the energy storage spring was constructed based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working load and push rod stroke data measured by the comprehensive valve test-bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring has failed.
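
    As a generic illustration of the fault tree analysis method the paper builds on, the sketch below evaluates a small hypothetical tree of AND/OR gates from basic-event states; the gate structure and event names are invented for illustration and do not reproduce the paper's actual fault tree.

```python
def evaluate(node, gates, events):
    """Return True if the event described by `node` occurs, given basic-event states."""
    if node in events:                               # basic event (leaf)
        return events[node]
    gate = gates[node]                               # intermediate or top event
    values = [evaluate(child, gates, events) for child in gate["inputs"]]
    return all(values) if gate["type"] == "AND" else any(values)

# Hypothetical tree: the top event occurs if the spring branch or the air line fails.
gates = {
    "parking_brake_failure": {"type": "OR", "inputs": ["spring_fault", "air_line_fault"]},
    "spring_fault": {"type": "AND", "inputs": ["spring_fatigue", "insufficient_preload"]},
}
events = {"spring_fatigue": True, "insufficient_preload": True, "air_line_fault": False}
print(evaluate("parking_brake_failure", gates, events))   # True
```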

  10. microPIR2: a comprehensive database for human–mouse comparative study of microRNA–promoter interactions

    PubMed Central

    Piriyapongsa, Jittima; Bootchai, Chaiwat; Ngamphiw, Chumpol; Tongsima, Sissades

    2014-01-01

    microRNA (miRNA)–promoter interaction resource (microPIR) is a public database containing over 15 million predicted miRNA target sites located within human promoter sequences. These predicted targets are presented along with their related genomic and experimental data, making the microPIR database the most comprehensive repository of miRNA promoter target sites. Here, we describe major updates of the microPIR database including new target predictions in the mouse genome and revised human target predictions. The updated database (microPIR2) now provides ∼80 million human and 40 million mouse predicted target sites. In addition to being a reference database, microPIR2 is a tool for comparative analysis of target sites on the promoters of human–mouse orthologous genes. In particular, this new feature was designed to identify potential miRNA–promoter interactions conserved between species that could be stronger candidates for further experimental validation. We also incorporated additional supporting information to microPIR2 such as nuclear and cytoplasmic localization of miRNAs and miRNA–disease association. Extra search features were also implemented to enable various investigations of targets of interest. Database URL: http://www4a.biotec.or.th/micropir2 PMID:25425035

  11. Complete Proteomic-Based Enzyme Reaction and Inhibition Kinetics Reveal How Monolignol Biosynthetic Enzyme Families Affect Metabolic Flux and Lignin in Populus trichocarpa

    PubMed Central

    Wang, Jack P.; Naik, Punith P.; Chen, Hsi-Chuan; Shi, Rui; Lin, Chien-Yuan; Liu, Jie; Shuford, Christopher M.; Li, Quanzi; Sun, Ying-Hsuan; Tunlaya-Anukit, Sermsawat; Williams, Cranos M.; Muddiman, David C.; Ducoste, Joel J.; Sederoff, Ronald R.; Chiang, Vincent L.

    2014-01-01

    We established a predictive kinetic metabolic-flux model for the 21 enzymes and 24 metabolites of the monolignol biosynthetic pathway using Populus trichocarpa secondary differentiating xylem. To establish this model, a comprehensive study was performed to obtain the reaction and inhibition kinetic parameters of all 21 enzymes based on functional recombinant proteins. A total of 104 Michaelis-Menten kinetic parameters and 85 inhibition kinetic parameters were derived from these enzymes. Through mass spectrometry, we obtained the absolute quantities of all 21 pathway enzymes in the secondary differentiating xylem. This extensive experimental data set, generated from a single tissue specialized in wood formation, was used to construct the predictive kinetic metabolic-flux model to provide a comprehensive mathematical description of the monolignol biosynthetic pathway. The model was validated using experimental data from transgenic P. trichocarpa plants. The model predicts how pathway enzymes affect lignin content and composition, explains a long-standing paradox regarding the regulation of monolignol subunit ratios in lignin, and reveals novel mechanisms involved in the regulation of lignin biosynthesis. This model provides an explanation of the effects of genetic and transgenic perturbations of the monolignol biosynthetic pathway in flowering plants. PMID:24619611
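
    The kinetic building block of such a flux model is the Michaelis–Menten rate law, optionally extended with inhibition terms. The sketch below evaluates one reaction step with competitive inhibition; it is not the authors' 21-enzyme system, and the parameter values are placeholders.

```python
def mm_rate(s, vmax, km, inhibitor=0.0, ki=None):
    """Michaelis-Menten rate with optional competitive inhibition.

    v = vmax * s / (km * (1 + i / ki) + s)
    """
    km_app = km * (1.0 + inhibitor / ki) if ki else km
    return vmax * s / (km_app + s)

# Illustrative numbers only: flux through one step with and without an inhibitor.
print(mm_rate(s=50.0, vmax=10.0, km=20.0))                         # ~7.14
print(mm_rate(s=50.0, vmax=10.0, km=20.0, inhibitor=5.0, ki=2.0))  # ~4.17
```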

  12. The Comprehension and Validation of Social Information.

    ERIC Educational Resources Information Center

    Wyer, Robert S., Jr.; Radvansky, Gabriel A.

    1999-01-01

    Proposes a theory of social cognition to account for the comprehension and verification of social information. The theory views comprehension as a process of constructing situation models of new information on the basis of previously formed models about its referents. The comprehension of both single statements and multiple pieces of information…

  13. Experimental evaluation of the certification-trail method

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.

    1993-01-01

    Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
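
    To make the general pattern concrete (this is not one of the paper's implementations), the toy sketch below has the primary computation emit a small certification trail, here the sorting permutation, that lets a secondary checker validate the answer without repeating the full computation.

```python
def sort_with_trail(xs):
    """Primary computation: sort xs and emit a certification trail.

    The trail is the permutation of original indices that produces the answer.
    """
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in order], order

def check_with_trail(xs, answer, trail):
    """Secondary computation: validate the answer using the trail."""
    if sorted(trail) != list(range(len(xs))):       # trail must be a permutation
        return False
    if [xs[i] for i in trail] != answer:            # answer must follow the trail
        return False
    return all(a <= b for a, b in zip(answer, answer[1:]))  # answer must be ordered

data = [5, 2, 9, 1]
answer, trail = sort_with_trail(data)
print(check_with_trail(data, answer, trail))        # True for a fault-free run
```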

  14. Complex and elementary histological scoring systems for articular cartilage repair.

    PubMed

    Orth, Patrick; Madry, Henning

    2015-08-01

    The repair of articular cartilage defects is increasingly moving into the focus of experimental and clinical investigations. Histological analysis is the gold standard for a valid and objective evaluation of cartilaginous repair tissue and predominantly relies on the use of established scoring systems. In the past three decades, numerous elementary and complex scoring systems have been described and modified, including those of O'Driscoll, Pineda, Wakitani, Sellers and Fortier for entire defects as well as those according to the International Cartilage Repair Society (ICRS-I/II) for osteochondral tissue biopsies. Yet, this coexistence of different grading scales inconsistently addressing diverse parameters may impede comparability between reported study outcomes. Furthermore, validation of these histological scoring systems has only seldom been performed to date. The aim of this review is (1) to give a comprehensive overview and to compare the most important established histological scoring systems for articular cartilage repair, (2) to describe their specific advantages and pitfalls, and (3) to provide valid recommendations for their use in translational and clinical studies of articular cartilage repair.

  15. [A new teaching mode improves the effect of comprehensive experimental teaching of genetics].

    PubMed

    Fenghua, He; Jieqiang, Li; Biyan, Zhu; Feng, Gao

    2015-04-01

    To improve the research atmosphere in genetics experimental teaching and develop students' creativity in research, we carried out a reform of comprehensive experimental teaching, which is one of the important modules of genetics practice. In our new student-centered teaching mode, students chose research topics, performed experiments and took innovative approaches independently. With the open laboratory and technical platform in our experimental teaching center, students finished their experiments and were required to write a mini-research article. Comprehensive experimental teaching thus serves as scientific research practice before students complete their thesis. Through this teaching practice, students' research skills in experimental design and operation, data analysis and results presentation, as well as their collaboration spirit and innovation consciousness, are strengthened.

  16. An integrated multiscale river basin observing system in the Heihe River Basin, northwest China

    NASA Astrophysics Data System (ADS)

    Li, X.; Liu, S.; Xiao, Q.; Ma, M.; Jin, R.; Che, T.

    2015-12-01

    Using the watershed as the unit to establish an integrated watershed observing system has been an important trend in integrated eco-hydrologic studies over the past ten years. Thus far, a relatively comprehensive watershed observing system has been established in the Heihe River Basin, northwest China. In addition, two comprehensive remote sensing hydrology experiments have been conducted sequentially in the Heihe River Basin: the Watershed Allied Telemetry Experimental Research (WATER) (2007-2010) and the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) (2012-2015). An important result of WATER was the generation of multi-scale, high-quality comprehensive datasets, which have greatly supported the development, improvement and validation of a series of ecological, hydrological and quantitative remote-sensing models, achieving a breakthrough in solving the "data bottleneck" problem. HiWATER was initiated in 2012. This project has established a world-class hydrological and meteorological observation network, a flux measurement matrix and an eco-hydrological wireless sensor network. A set of super-high-resolution airborne remote-sensing data has also been obtained. In addition, there has been important progress in scaling research. Furthermore, the automatic acquisition, transmission, quality control and remote control of the observational data have been realized through the use of wireless sensor network technology. The observation and information systems have been highly integrated, which will provide a solid foundation for establishing a research platform that integrates observation, data management, model simulation, scenario analysis and decision-making support to foster 21st-century watershed science in China.

  17. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of the vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes that have been developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting mechanical properties of additively manufactured parts produced by directed energy deposition with blown powder as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Representative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.

  18. Three-dimensional flow measurements in a tesla turbine rotor

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas; Schosser, Constantin; Hain, Rainer; Kaehler, Christian

    2015-11-01

    Tesla turbines are fluid mechanical devices converting flow energy into rotation energy by two physical effects: friction and adhesion. The advantages of the Tesla turbine are its simple and robust design, as well as its scalability, which makes it suitable for custom power supply solutions and renewable energy applications. To this day, there is a lack of experimental data to validate theoretical studies and CFD simulations of these turbines. This work presents a comprehensive analysis of the flow through a Tesla turbine rotor gap, with a gap height of only 0.5 mm, by means of three-dimensional Particle Tracking Velocimetry (3D-PTV). For laminar flows, the experimental results match the theory very well, since the measured flow profiles show the predicted second-order parabolic shape in the radial direction and a fourth-order behaviour in the circumferential direction. In addition to these laminar measurements, turbulent flows at higher mass flow rates were investigated.

  19. Ultra-Precision Measurement and Control of Angle Motion in Piezo-Based Platforms Using Strain Gauge Sensors and a Robust Composite Controller

    PubMed Central

    Liu, Lei; Bai, Yu-Guang; Zhang, Da-Li; Wu, Zhi-Gang

    2013-01-01

    The measurement and control strategy of a piezo-based platform using strain gauge sensors (SGS) and a robust composite controller is investigated in this paper. First, the experimental setup is constructed using a piezo-based platform, SGS sensors, an AD5435 platform and two voltage amplifiers. Then, a measurement strategy to measure the tip/tilt angles accurately at the sub-μrad level is presented. A comprehensive composite control strategy, designed to enhance the tracking accuracy with a novel driving principle, is also proposed. Finally, an experiment is presented to validate the measurement and control strategy. The experimental results demonstrate that the proposed measurement and control strategy provides accurate angle motion with a root mean square (RMS) error of 0.21 μrad, which is approximately equal to the noise level. PMID:23860316

  20. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of the vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes that have been developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting mechanical properties of additively manufactured parts produced by directed energy deposition with blown powder as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Representative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.

  1. cellPACK: A Virtual Mesoscope to Model and Visualize Structural Systems Biology

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Al-Alusi, Mostafa; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2014-01-01

    cellPACK assembles computational models of the biological mesoscale, an intermediate scale (10⁻⁷–10⁻⁸ m) between molecular and cellular biology. cellPACK’s modular architecture unites existing and novel packing algorithms to generate, visualize and analyze comprehensive 3D models of complex biological environments that integrate data from multiple experimental systems biology and structural biology sources. cellPACK is currently available as open source code, with tools for validation of models and with recipes and models for five biological systems: blood plasma, cytoplasm, synaptic vesicles, HIV and a mycoplasma cell. We have applied cellPACK to model distributions of HIV envelope protein to test several hypotheses for consistency with experimental observations. Biologists, educators, and outreach specialists can interact with cellPACK models, develop new recipes and perform packing experiments through scripting and graphical user interfaces at http://cellPACK.org. PMID:25437435

  2. Validity of CBM Measures of Oral Reading Fluency and Reading Comprehension on High-Stakes Reading Assessments in Grades 7 and 8

    ERIC Educational Resources Information Center

    Baker, Doris Luft; Biancarosa, Gina; Park, Bitnara Jasmine; Bousselot, Tracy; Smith, Jean-Louise; Baker, Scott K.; Kame'enui, Edward J.; Alonzo, Julie; Tindal, Gerald

    2015-01-01

    We examined the criterion validity and diagnostic efficiency of oral reading fluency (ORF), word reading accuracy, and reading comprehension (RC) for students in Grades 7 and 8 taking into account form effects of ORF, time of assessment, and individual differences, including student designations of limited English proficiency and special education…

  3. A National Study of the Validity and Utility of the Comprehensive Assessment of School Environment (CASE) Survey

    ERIC Educational Resources Information Center

    McGuffey, Amy R.

    2016-01-01

    A healthy school climate is necessary for improvement. The purpose of this study was to evaluate the construct validity and usability of the Comprehensive Assessment of School Environment (CASE) as it was purportedly realigned to the three dimensions of the Breaking Ranks Framework developed by the National Association of Secondary School…

  4. An interactive web-based application for Comprehensive Analysis of RNAi-screen Data.

    PubMed

    Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B; Germain, Ronald N; Smith, Jennifer A; Simpson, Kaylene J; Martin, Scott E; Buehler, Eugen; Beuhler, Eugen; Fraser, Iain D C

    2016-02-23

    RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight to microRNA (miRNA) activity based on siRNA seed enrichment.
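
    A minimal sketch of the kind of per-plate normalization and threshold-based hit selection that such a workflow includes is shown below; the robust z-score used here is one common choice and is not claimed to be CARD's exact statistic, and the plate data are synthetic.

```python
import numpy as np

def robust_z(values):
    """Median/MAD-based z-scores, less sensitive to outliers than mean/std."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return (values - med) / (1.4826 * mad)

def select_hits(readout, z_cutoff=-4.0):
    """Return indices of wells whose normalized readout falls below the cutoff."""
    return np.flatnonzero(robust_z(readout) <= z_cutoff)

rng = np.random.default_rng(1)
plate = rng.normal(loc=100.0, scale=10.0, size=384)   # synthetic plate readout
plate[[10, 200]] -= 60.0                              # two strong knockdown wells
print(select_hits(plate))                             # expected to flag wells 10 and 200
```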

  5. Comprehensive Characterization of Cancer Driver Genes and Mutations.

    PubMed

    Bailey, Matthew H; Tokheim, Collin; Porta-Pardo, Eduard; Sengupta, Sohini; Bertrand, Denis; Weerasinghe, Amila; Colaprico, Antonio; Wendl, Michael C; Kim, Jaegil; Reardon, Brendan; Ng, Patrick Kwok-Shing; Jeong, Kang Jin; Cao, Song; Wang, Zixing; Gao, Jianjiong; Gao, Qingsong; Wang, Fang; Liu, Eric Minwei; Mularoni, Loris; Rubio-Perez, Carlota; Nagarajan, Niranjan; Cortés-Ciriano, Isidro; Zhou, Daniel Cui; Liang, Wen-Wei; Hess, Julian M; Yellapantula, Venkata D; Tamborero, David; Gonzalez-Perez, Abel; Suphavilai, Chayaporn; Ko, Jia Yu; Khurana, Ekta; Park, Peter J; Van Allen, Eliezer M; Liang, Han; Lawrence, Michael S; Godzik, Adam; Lopez-Bigas, Nuria; Stuart, Josh; Wheeler, David; Getz, Gad; Chen, Ken; Lazar, Alexander J; Mills, Gordon B; Karchin, Rachel; Ding, Li

    2018-04-05

    Identifying molecular cancer drivers is critical for precision oncology. Multiple advanced algorithms to identify drivers now exist, but systematic attempts to combine and optimize them on large datasets are few. We report a PanCancer and PanSoftware analysis spanning 9,423 tumor exomes (comprising all 33 of The Cancer Genome Atlas projects) and using 26 computational tools to catalog driver genes and mutations. We identify 299 driver genes with implications regarding their anatomical sites and cancer/cell types. Sequence- and structure-based analyses identified >3,400 putative missense driver mutations supported by multiple lines of evidence. Experimental validation confirmed 60%-85% of predicted mutations as likely drivers. We found that >300 MSI tumors are associated with high PD-1/PD-L1, and 57% of tumors analyzed harbor putative clinically actionable events. Our study represents the most comprehensive discovery of cancer genes and mutations to date and will serve as a blueprint for future biological and clinical endeavors. Published by Elsevier Inc.

  6. Comprehensive Analysis of Immunological Synapse Phenotypes Using Supported Lipid Bilayers.

    PubMed

    Valvo, Salvatore; Mayya, Viveka; Seraia, Elena; Afrose, Jehan; Novak-Kotzer, Hila; Ebner, Daniel; Dustin, Michael L

    2017-01-01

    Supported lipid bilayers (SLB) formed on glass substrates have been a useful tool for study of immune cell signaling since the early 1980s. The mobility of lipid-anchored proteins in the system, first described for antibodies binding to synthetic phospholipid head groups, allows for the measurement of two-dimensional binding reactions and signaling processes in a single imaging plane over time or for fixed samples. The fragility of SLB and the challenges of building and validating individual substrates limit most experimenters to ~10 samples per day, perhaps increasing this few-fold when examining fixed samples. Successful experiments might then require further days to fully analyze. We present methods for automation of many steps in SLB formation, imaging in 96-well glass bottom plates, and analysis that enables >100-fold increase in throughput for fixed samples and wide-field fluorescence. This increased throughput will allow better coverage of relevant parameters and more comprehensive analysis of aspects of the immunological synapse that are well reconstituted by SLB.

  7. An interactive web-based application for Comprehensive Analysis of RNAi-screen Data

    PubMed Central

    Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B.; Germain, Ronald N.; Smith, Jennifer A.; Simpson, Kaylene J.; Martin, Scott E.; Beuhler, Eugen; Fraser, Iain D. C.

    2016-01-01

    RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight to microRNA (miRNA) activity based on siRNA seed enrichment. PMID:26902267

  8. The validity of upper-limb neurodynamic tests for detecting peripheral neuropathic pain.

    PubMed

    Nee, Robert J; Jull, Gwendolen A; Vicenzino, Bill; Coppieters, Michel W

    2012-05-01

    The validity of upper-limb neurodynamic tests (ULNTs) for detecting peripheral neuropathic pain (PNP) was assessed by reviewing the evidence on plausibility, the definition of a positive test, reliability, and concurrent validity. Evidence was identified by a structured search for peer-reviewed articles published in English before May 2011. The quality of concurrent validity studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool, where appropriate. Biomechanical and experimental pain data support the plausibility of ULNTs. Evidence suggests that a positive ULNT should at least partially reproduce the patient's symptoms and that structural differentiation should change these symptoms. Data indicate that this definition of a positive ULNT is reliable when used clinically. Limited evidence suggests that the median nerve test, but not the radial nerve test, helps determine whether a patient has cervical radiculopathy. The median nerve test does not help diagnose carpal tunnel syndrome. These findings should be interpreted cautiously, because diagnostic accuracy might have been distorted by the investigators' definitions of a positive ULNT. Furthermore, patients with PNP who presented with increased nerve mechanosensitivity rather than conduction loss might have been incorrectly classified by electrophysiological reference standards as not having PNP. The only evidence for concurrent validity of the ulnar nerve test was a case study on cubital tunnel syndrome. We recommend that researchers develop more comprehensive reference standards for PNP to accurately assess the concurrent validity of ULNTs and continue investigating the predictive validity of ULNTs for prognosis or treatment response.

  9. A comprehensive review on the quasi-induced exposure technique.

    PubMed

    Jiang, Xinguo; Lyles, Richard W; Guo, Runhua

    2014-04-01

    The goal is to comprehensively examine the state-of-the-art applications and methodological development of quasi-induced exposure and consequently pinpoint the future research directions in terms of implementation guidelines, limitations, and validity tests. The paper conducts a comprehensive review on approximately 45 published papers relevant to quasi-induced exposure regarding four key topics of interest: applications, responsibility assignment, validation of assumptions, and methodological development. Specific findings include that: (1) there is no systematic data screening procedure in place and how the eliminated crash data will impact the responsibility assignment is generally unknown; (2) there is a lack of necessary efforts to assess the validity of assumptions prior to its application and the validation efforts are mostly restricted to the aggregated levels due to the limited availability of exposure truth; and (3) there is a deficiency of quantitative analyses to evaluate the magnitude and directions of bias as a result of injury risks and crash avoidance ability. The paper points out the future research directions and insights in terms of the validity tests and implementation guidelines. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Readability and Comprehension of Self-Report Binge Eating Measures

    PubMed Central

    Richards, Lauren K.; McHugh, R. Kathryn; Pratt, Elizabeth M.; Thompson-Brenner, Heather

    2013-01-01

    The validity of self-report binge eating instruments among individuals with limited literacy is uncertain. This study aims to evaluate reading grade level and multiple domains of comprehension of 13 commonly used self-report assessments of binge eating for use in low-literacy populations. We evaluated self-report binge eating measures with respect to reading grade levels, measure length, formatting and linguistic problems. Results: All measures were written at a reading grade level higher than is recommended for patient materials (above the 5th to 6th grade level), and contained several challenging elements related to comprehension. Correlational analyses suggested that readability and comprehension elements were distinct contributors to measure difficulty. Individuals with binge eating who have low levels of educational attainment or limited literacy are often underrepresented in measure validation studies. Validity of measures and accurate assessment of symptoms depends on an individual's ability to read and comprehend instructions and items, and these may be compromised in populations with lower levels of literacy. PMID:23557814

  11. Readability and comprehension of self-report binge eating measures.

    PubMed

    Richards, Lauren K; McHugh, R Kathryn; Pratt, Elizabeth M; Thompson-Brenner, Heather

    2013-04-01

    The validity of self-report binge eating instruments among individuals with limited literacy is uncertain. This study aims to evaluate reading grade level and multiple domains of comprehension of 13 commonly used self-report assessments of binge eating for use in low-literacy populations. We evaluated self-report binge eating measures with respect to reading grade levels, measure length, formatting and linguistic problems. All measures were written at a reading grade level higher than is recommended for patient materials (above the 5th to 6th grade level), and contained several challenging elements related to comprehension. Correlational analyses suggested that readability and comprehension elements were distinct contributors to measure difficulty. Individuals with binge eating who have low levels of educational attainment or limited literacy are often underrepresented in measure validation studies. Validity of measures and accurate assessment of symptoms depend on an individual's ability to read and comprehend instructions and items, and these may be compromised in populations with lower levels of literacy. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. The Respective Advantages and Disadvantages of Different Ways of Measuring the Instructional Sensitivity of Reading Comprehension Test Items.

    ERIC Educational Resources Information Center

    Perkins, Kyle

    In this paper four classes of procedures for measuring the instructional sensitivity of reading comprehension test items are reviewed. True experimental designs are not recommended because some of the most important reading comprehension variables do not lend themselves to experimental manipulation. "Ex post facto" factorial designs are…

  13. Is an observed non-co-linear RNA product spliced in trans, in cis or just in vitro?

    PubMed Central

    Yu, Chun-Ying; Liu, Hsiao-Jung; Hung, Li-Yuan; Kuo, Hung-Chih; Chuang, Trees-Juen

    2014-01-01

    Global transcriptome investigations often result in the detection of an enormous number of transcripts composed of non-co-linear sequence fragments. Such ‘aberrant’ transcript products may arise from post-transcriptional events or genetic rearrangements, or may otherwise be false positives (sequencing/alignment errors or in vitro artifacts). Moreover, post-transcriptionally non-co-linear (‘PtNcl’) transcripts can arise from trans-splicing or back-splicing in cis (to generate so-called ‘circular RNA’). Here, we collected previously-predicted human non-co-linear RNA candidates, and designed a validation procedure integrating in silico filters with multiple experimental validation steps to examine their authenticity. We showed that >50% of the tested candidates were in vitro artifacts, even though some had been previously validated by RT-PCR. After excluding the possibility of genetic rearrangements, we distinguished between trans-spliced and circular RNAs, and confirmed that these two splicing forms can share the same non-co-linear junction. Importantly, the experimentally-confirmed PtNcl RNA events and their corresponding PtNcl splicing types (i.e. trans-splicing, circular RNA, or both sharing the same junction) were all expressed in rhesus macaque, and some were even expressed in mouse. Our study thus describes an essential procedure for confirming PtNcl transcripts, and provides further insight into the evolutionary role of PtNcl RNA events, opening up this important, but understudied, class of post-transcriptional events for comprehensive characterization. PMID:25053845

  14. Symptom validity test performance and consistency of self-reported memory functioning of Operation Enduring Freedom/Operation Iraqi freedom veterans with positive Veteran Health Administration Comprehensive Traumatic Brain Injury evaluations.

    PubMed

    Russo, Arthur C

    2012-12-01

    Operation Enduring Freedom and Operation Iraqi Freedom combat veterans given definite diagnoses of mild Traumatic Brain Injury (TBI) during the Veteran Health Administration (VHA) Comprehensive TBI evaluation and reporting no post-deployment head injury were examined to assess (a) consistency of self-reported memory impairment and (b) symptom validity test (SVT) performance via a two-part study. Study 1 found that while 49 of 50 veterans reported moderate to very severe memory impairment during the VHA Comprehensive TBI evaluation, only 7 had reported any memory problem at the time of their Department of Defense (DOD) post-deployment health assessment. Study 2 found that of 38 veterans referred for neuropsychological evaluations following a positive VHA Comprehensive TBI evaluation, 68.4% failed the Word Memory Test, a forced choice memory recognition symptom validity task. Together, these studies raise questions concerning the use of veteran symptom self-report for TBI assessments and argue for the inclusion of SVTs and the expanded use of contemporaneous DOD records to improve the diagnostic accuracy of the VHA Comprehensive TBI evaluation.

  15. Concurrent Validity and Diagnostic Accuracy of the Dynamic Indicators of Basic Early Literacy Skills and the Comprehensive Test of Phonological Processing

    ERIC Educational Resources Information Center

    Hintze, John M.; Ryan, Amanda L.; Stoner, Gary

    2003-01-01

    The purpose of this study was to (a) examine the concurrent validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) with the Comprehensive Test of Phonological Processing (CTOPP), and (b) explore the diagnostic accuracy of the DIBELS in predicting CTOPP performance using suggested and alternative cut-scores. Eighty-six students…

  16. Sooting turbulent jet flame: characterization and quantitative soot measurements

    NASA Astrophysics Data System (ADS)

    Köhler, M.; Geigle, K. P.; Meier, W.; Crosland, B. M.; Thomson, K. A.; Smallwood, G. J.

    2011-08-01

    Computational fluid dynamics (CFD) modelers require high-quality experimental data sets for validation of their numerical tools. Preferred features for numerical simulations of a sooting, turbulent test case flame are simplicity (no pilot flame), well-defined boundary conditions, and sufficient soot production. This paper proposes a non-premixed C2H4/air turbulent jet flame to fill this role and presents an extensive database for soot model validation. The sooting turbulent jet flame has a total visible flame length of approximately 400 mm and a fuel-jet Reynolds number of 10,000. The flame has a measured lift-off height of 26 mm which acts as a sensitive marker for CFD model validation, while this novel compiled experimental database of soot properties, temperature and velocity maps are useful for the validation of kinetic soot models and numerical flame simulations. Due to the relatively simple burner design which produces a flame with sufficient soot concentration while meeting modelers' needs with respect to boundary conditions and flame specifications as well as the present lack of a sooting "standard flame", this flame is suggested as a new reference turbulent sooting flame. The flame characterization presented here involved a variety of optical diagnostics including quantitative 2D laser-induced incandescence (2D-LII), shifted-vibrational coherent anti-Stokes Raman spectroscopy (SV-CARS), and particle image velocimetry (PIV). Producing an accurate and comprehensive characterization of a transient sooting flame was challenging and required optimization of these diagnostics. In this respect, we present the first simultaneous, instantaneous PIV, and LII measurements in a heavily sooting flame environment. Simultaneous soot and flow field measurements can provide new insights into the interaction between a turbulent vortex and flame chemistry, especially since soot structures in turbulent flames are known to be small and often treated in a statistical manner.

  17. Computational discovery and in vivo validation of hnf4 as a regulatory gene in planarian regeneration.

    PubMed

    Lobo, Daniel; Morokuma, Junji; Levin, Michael

    2016-09-01

    Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized as hnf4 an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. michael.levin@tufts.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Evaluation and Validation of Reference Genes for qRT-PCR Normalization in Frankliniella occidentalis (Thysanoptera:Thripidae)

    PubMed Central

    Zheng, Yu-Tao; Li, Hong-Bo; Lu, Ming-Xing; Du, Yu-Zhou

    2014-01-01

    Quantitative real time PCR (qRT-PCR) has emerged as a reliable and reproducible technique for studying gene expression analysis. For accurate results, the normalization of data with reference genes is particularly essential. Once the transcriptome sequencing of Frankliniella occidentalis was completed, numerous unigenes were identified and annotated. Unfortunately, there are no studies on the stability of reference genes used in F. occidentalis. In this work, seven candidate reference genes, including actin, 18S rRNA, H3, tubulin, GAPDH, EF-1 and RPL32, were evaluated for their suitability as normalization genes under different experimental conditions using the statistical software programs BestKeeper, geNorm, Normfinder and the comparative ΔCt method. Because the rankings of the reference genes provided by each of the four programs were different, we chose a user-friendly web-based comprehensive tool RefFinder to get the final ranking. The result demonstrated that EF-1 and RPL32 displayed the most stable expression in different developmental stages; RPL32 and GAPDH showed the most stable expression at high temperatures, while 18S and EF-1 exhibited the most stable expression at low temperatures. In this study, we validated the suitable reference genes in F. occidentalis for gene expression profiling under different experimental conditions. The choice of internal standard is very important in the normalization of the target gene expression levels, thus validating and selecting the best genes will help improve the quality of gene expression data of F. occidentalis. What is more, these validated reference genes could serve as the basis for the selection of candidate reference genes in other insects. PMID:25356721
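
    For reference, a hedged sketch of the comparative ΔCt idea mentioned above: each candidate gene is ranked by the mean standard deviation of its pairwise ΔCt values against the other candidates across samples, with a lower value indicating more stable expression. The Ct matrix and gene labels below are invented for illustration and carry no biological meaning.

```python
import numpy as np

def comparative_dct_ranking(ct, gene_names):
    """Rank candidate reference genes by the mean SD of pairwise delta-Ct values.

    ct: array of shape (n_samples, n_genes) holding raw Ct values.
    """
    n_genes = ct.shape[1]
    mean_sd = []
    for i in range(n_genes):
        sds = [np.std(ct[:, i] - ct[:, j], ddof=1)
               for j in range(n_genes) if j != i]
        mean_sd.append(float(np.mean(sds)))
    order = np.argsort(mean_sd)
    return [(gene_names[k], round(mean_sd[k], 3)) for k in order]

# Invented Ct values for four candidate genes across five samples.
ct = np.array([[20.1, 22.3, 25.0, 18.9],
               [20.3, 22.8, 26.1, 19.0],
               [20.2, 22.5, 24.2, 19.1],
               [20.4, 23.1, 27.0, 18.8],
               [20.2, 22.6, 25.5, 19.0]])
print(comparative_dct_ranking(ct, ["geneA", "geneB", "geneC", "geneD"]))
```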

  19. Evaluation and validation of reference genes for qRT-PCR normalization in Frankliniella occidentalis (Thysanoptera: Thripidae).

    PubMed

    Zheng, Yu-Tao; Li, Hong-Bo; Lu, Ming-Xing; Du, Yu-Zhou

    2014-01-01

    Quantitative real time PCR (qRT-PCR) has emerged as a reliable and reproducible technique for studying gene expression analysis. For accurate results, the normalization of data with reference genes is particularly essential. Once the transcriptome sequencing of Frankliniella occidentalis was completed, numerous unigenes were identified and annotated. Unfortunately, there are no studies on the stability of reference genes used in F. occidentalis. In this work, seven candidate reference genes, including actin, 18S rRNA, H3, tubulin, GAPDH, EF-1 and RPL32, were evaluated for their suitability as normalization genes under different experimental conditions using the statistical software programs BestKeeper, geNorm, Normfinder and the comparative ΔCt method. Because the rankings of the reference genes provided by each of the four programs were different, we chose a user-friendly web-based comprehensive tool RefFinder to get the final ranking. The result demonstrated that EF-1 and RPL32 displayed the most stable expression in different developmental stages; RPL32 and GAPDH showed the most stable expression at high temperatures, while 18S and EF-1 exhibited the most stable expression at low temperatures. In this study, we validated the suitable reference genes in F. occidentalis for gene expression profiling under different experimental conditions. The choice of internal standard is very important in the normalization of the target gene expression levels, thus validating and selecting the best genes will help improve the quality of gene expression data of F. occidentalis. What is more, these validated reference genes could serve as the basis for the selection of candidate reference genes in other insects.

  20. A numerical and experimental study on optimal design of multi-DOF viscoelastic supports for passive vibration control in rotating machinery

    NASA Astrophysics Data System (ADS)

    Ribeiro, Eduardo Afonso; Lopes, Eduardo Márcio de Oliveira; Bavastri, Carlos Alberto

    2017-12-01

    Viscoelastic materials have played an important role in passive vibration control. Nevertheless, the use of such materials in the supports of rotating machines for vibration control is more recent, mainly when these supports present additional complexities such as multiple degrees of freedom and require accurate models to predict the dynamic behavior of viscoelastic materials working over a broad band of frequencies and temperatures. Previously, the authors proposed a methodology for the optimal design of viscoelastic supports (VES) for vibration suppression in rotordynamics, which improves dynamic prediction accuracy, calculation speed, and the modeling of VES as complex structures. However, a comprehensive numerical study of the dynamics of rotor-VES systems, regarding the types and combinations of translational and rotational degrees of freedom (DOFs) and accompanied by the corresponding experimental validation, was still lacking. This paper presents such a study, considering different types and combinations of DOFs, the number of additional masses/inertias, and the kind and association of the applied viscoelastic materials (VEMs). The results, regarding unbalance frequency response, transmissibility and displacement due to static loads, lead to: 1) treating VES as complex structures, which improves the efficacy of passive vibration control; and 2) identifying the best configuration, in terms of DOFs and VEM choice and association, for practical applications concerning passive vibration control and load resistance. The outcomes of the experimental validation attest to the accuracy of the proposed methodology.

  1. Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonti, Roberta, E-mail: roberta.fonti@tum.de; Barthel, Rainer, E-mail: r.barthel@lrz.tu-muenchen.de; Formisano, Antonio, E-mail: antoform@unina.it

    2015-12-31

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different “modes of damage” of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L’Aquila district is discussed.

  2. Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research

    PubMed Central

    Crump, Matthew J. C.; McDonnell, John V.; Gureckis, Todd M.

    2013-01-01

    Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiments differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, using participants recruited through AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web browser, others revealed disparities between laboratory and online results. A number of important lessons learned in the process of conducting these replications should be of value to other researchers. PMID:23516406

  3. Behavior of Industrial Steel Rack Connections

    NASA Astrophysics Data System (ADS)

    Shah, S. N. R.; Ramli Sulong, N. H.; Khan, R.; Jumaat, M. Z.; Shariati, M.

    2016-03-01

    Beam-to-column connections (BCCs) used in steel pallet racks (SPRs) play a significant role in maintaining the stability of rack structures in the down-aisle direction. The variety in the geometry of commercially available beam end connectors hampers the development of a generalized analytical design approach for SPR BCCs. The experimental prediction of flexibility in SPR BCCs is prohibitively expensive and difficult for all types of commercially available beam end connectors. A suitable solution to derive a uniform M-θ relationship for each connection type in terms of geometric parameters may be achieved through finite element (FE) modeling. This study first presents a comprehensive description of the experimental investigations that were performed and used as the calibration basis for the numerical study that constitutes its main contribution. A three-dimensional (3D) nonlinear finite element (FE) model was developed and calibrated against the experimental results. The FE model took into account material nonlinearities, geometrical properties and large displacements. Comparisons between numerical and experimental data for the observed failure modes and the M-θ relationship showed close agreement. The validated FE model was further extended to a parametric analysis to identify the effects of various parameters that may affect the overall performance of the connection.

  4. ARMOUR - A Rice miRNA: mRNA Interaction Resource.

    PubMed

    Sanan-Mishra, Neeti; Tripathi, Anita; Goswami, Kavita; Shukla, Rohit N; Vasudevan, Madavan; Goswami, Hitesh

    2018-01-01

    ARMOUR was developed as A Rice miRNA:mRNA interaction resource. This informative and interactive database includes the experimentally validated expression profiles of miRNAs under different developmental and abiotic stress conditions across seven Indian rice cultivars. This comprehensive database covers 689 known and 1664 predicted novel miRNAs and their expression profiles in more than 38 different tissues or conditions, along with their predicted/known target transcripts. The understanding of the miRNA:mRNA interactome in the regulation of functional cellular machinery is supported by the sequence information of the mature and hairpin structures. ARMOUR offers users flexible querying through multiple routes, such as known gene identifiers, gene ontology identifiers and KEGG identifiers, and also allows on-the-fly fold-change analysis and sequence search queries with an inbuilt BLAST algorithm. The ARMOUR database provides a cohesive platform for novel and mature miRNAs and their expression in different experimental conditions, and allows searching for their interacting mRNA targets, GO annotations and their involvement in various biological pathways. The ARMOUR database includes a provision for adding more experimental data from users, with the aim of developing it as a platform for sharing and comparing experimental data contributed by research groups working on rice.

  5. BiRen: predicting enhancers with a deep-learning-based model using the DNA sequence alone.

    PubMed

    Yang, Bite; Liu, Feng; Ren, Chao; Ouyang, Zhangyi; Xie, Ziwei; Bo, Xiaochen; Shu, Wenjie

    2017-07-01

    Enhancer elements are noncoding stretches of DNA that play key roles in controlling gene expression programmes. Despite major efforts to develop accurate enhancer prediction methods, identifying enhancer sequences continues to be a challenge in the annotation of mammalian genomes. One of the major issues is the lack of large, sufficiently comprehensive sets of experimentally validated enhancers for humans or other species. Thus, there is an urgent need for computational methods that can be trained on the limited set of experimentally validated enhancers and decipher the transcriptional regulatory code encoded in enhancer sequences. We present a deep-learning-based hybrid architecture, BiRen, which predicts enhancers using the DNA sequence alone. Our results demonstrate that BiRen can learn common enhancer patterns directly from the DNA sequence and exhibits superior accuracy, robustness and generalizability in enhancer prediction relative to other state-of-the-art enhancer predictors based on sequence characteristics. BiRen will enable researchers to acquire a deeper understanding of the regulatory code of enhancer sequences. The BiRen method can be freely accessed at https://github.com/wenjiegroup/BiRen. Contact: shuwj@bmi.ac.cn or boxc@bmi.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
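
    For readers unfamiliar with sequence-only models, the sketch below shows the standard one-hot encoding typically used to feed DNA sequence into a deep network; it is an illustrative assumption, not BiRen's published architecture or code:

    ```python
    # Illustrative one-hot encoding of a DNA sequence, the usual input representation
    # for sequence-based deep models; this is NOT BiRen's actual implementation.
    import numpy as np

    def one_hot_dna(seq: str) -> np.ndarray:
        """Encode A/C/G/T as a (len(seq), 4) binary matrix; unknown bases (e.g. N) stay all-zero."""
        mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
        mat = np.zeros((len(seq), 4), dtype=np.float32)
        for i, base in enumerate(seq.upper()):
            if base in mapping:
                mat[i, mapping[base]] = 1.0
        return mat

    print(one_hot_dna("ACGTN").astype(int))   # each row marks the base at that position
    ```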

  6. Agreeing on Validity Arguments

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    2013-01-01

    Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…

  7. Development of functional oral health literacy assessment instruments: application of literacy and cognitive theories.

    PubMed

    Bridges, Susan M; Parthasarathy, Divya S; Au, Terry K F; Wong, Hai Ming; Yiu, Cynthia K Y; McGrath, Colman P

    2014-01-01

    This paper describes the development of a new literacy assessment instrument, the Hong Kong Oral Health Literacy Assessment Task for Paediatric Dentistry (HKOHLAT-P). Its relationship to literacy theory is analyzed to establish content and face validity. Implications for construct validity are examined by analyzing cognitive demand to determine how "comprehension" is measured. Key influences from literacy assessment were identified to analyze item development. Cognitive demand was analyzed using an established taxonomy. The HKOHLAT-P focuses on the functional domain of health literacy assessment. Items had strong content and face validity reflecting established principles from modern literacy theory. Inclusion of new text types signified relevant developments in the area of new literacies. Analysis of cognitive demand indicated that this instrument assesses the "comprehension" domain, specifically the areas of factual and procedural knowledge, with some assessment of conceptual knowledge. Metacognitive knowledge was not assessed. Comprehension tasks assessing patient health literacy predominantly examine functional health literacy at the lower levels of comprehension. Item development is influenced by the fields of situated and authentic literacy. Inclusion of content regarding multiliteracies is suggested for further research. Development of functional health literacy assessment instruments requires careful consideration of the clinical context in determining construct validity. © 2013 American Association of Public Health Dentistry.

  8. The Validity and reliability of the Comprehensive Home Environment Survey (CHES).

    PubMed

    Pinard, Courtney A; Yaroch, Amy L; Hart, Michael H; Serrano, Elena L; McFerren, Mary M; Estabrooks, Paul A

    2014-01-01

    Few comprehensive measures exist to assess contributors to childhood obesity within the home, specifically among low-income populations. The current study describes the modification and psychometric testing of the Comprehensive Home Environment Survey (CHES), an inclusive measure of the home food, physical activity, and media environment related to childhood obesity. The items were tested for content relevance by an expert panel and piloted in the priority population. The CHES was administered to low-income parents of children aged 5 to 17 years (N = 150); a subsample of parents completed it a second time, and additional caregivers also completed it, to establish test-retest and interrater reliability. Children older than 9 years (n = 95), as well as parents (N = 150), completed concurrent assessments of diet and physical activity behaviors (predictive validity). Analyses and item trimming resulted in 18 subscales and a total score, which displayed adequate internal consistency (α = .74-.92), high test-retest reliability (r ≥ .73, ps < .01), and interrater reliability (r ≥ .42, ps < .01). The CHES score and a validated screener for the home environment were correlated (r = .37, p < .01; concurrent validity). CHES subscales were significantly correlated with behavioral measures (r = -.20 to .55, p < .05; predictive validity). The CHES shows promise as a valid and reliable assessment of the home environment related to childhood obesity, including healthy diet and physical activity.
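
    As background for the internal-consistency figures quoted above, the following minimal sketch shows how Cronbach's alpha is conventionally computed from a respondents-by-items score matrix (hypothetical random data, not the CHES data set):

    ```python
    # Conventional Cronbach's alpha computation from a respondents-by-items matrix.
    # The demo matrix is random placeholder data (so alpha will be near zero), not CHES data.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: array of shape (n_respondents, n_items)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(0)
    demo = rng.integers(1, 6, size=(150, 8)).astype(float)  # hypothetical 150 x 8 Likert data
    print(f"alpha = {cronbach_alpha(demo):.2f}")
    ```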

  9. Experimental investigation on the infrared refraction and extinction properties of rock dust in tunneling face of coal mine.

    PubMed

    Wang, Wenzheng; Wang, Yanming; Shi, Guoqing

    2015-12-10

    Comprehensive experimental research on the fundamental optical properties of dust pollution in a coal mine is presented. Rock dust generated in a tunneling roadway was sampled, and the spectral complex refractive index within the infrared range of 2.5-25 μm was obtained by Fourier transform infrared spectroscopy measurements and the Kramers-Kronig relation. The experimental results were shown to be consistent with equivalent optical constants simulated by effective medium theory based on x-ray fluorescence component analysis, which indicates that the top three mineral components are SiO2 (62.06%), Al2O3 (21.26%), and Fe2O3 (4.27%). The complex refractive index, together with the spatial dust distribution measured with a dust filter and a particle size analyzer, was used to simulate the extinction properties of rock dust along the tunneling roadway, solved by the discrete ordinates method and a Mie scattering model. The compared results illustrate that transmission is clearly enhanced with increasing height above the floor but weakened with increasing horizontal distance from the air duct.
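
    The qualitative trend in the last sentence can be pictured with a simple Beer-Lambert attenuation sketch; the extinction coefficient and path lengths below are assumed illustrative values, not the paper's discrete-ordinates/Mie computation:

    ```python
    # Beer-Lambert-type attenuation along a dust-laden path; k_ext and the path lengths
    # are assumed illustrative values, not results from the paper.
    import numpy as np

    k_ext = 0.05                       # extinction coefficient of the dust cloud [1/m] (assumed)
    path = np.linspace(0.0, 50.0, 6)   # optical path length along the roadway [m]
    transmission = np.exp(-k_ext * path)

    for L, T in zip(path, transmission):
        print(f"path = {L:5.1f} m  ->  transmitted fraction = {T:.3f}")
    ```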

  10. Passageless Comprehension on the "Nelson-Denny Reading Test": Well above Chance for University Students

    ERIC Educational Resources Information Center

    Coleman, Chris; Lindstrom, Jennifer; Nelson, Jason; Lindstrom, William; Gregg, K. Noel

    2010-01-01

    The comprehension section of the "Nelson-Denny Reading Test" (NDRT) is widely used to assess the reading comprehension skills of adolescents and adults in the United States. In this study, the authors explored the content validity of the NDRT Comprehension Test (Forms G and H) by asking university students (with and without at-risk…

  11. Reading Comprehension Tests Vary in the Skills They Assess: Differential Dependence on Decoding and Oral Comprehension

    ERIC Educational Resources Information Center

    Keenan, Janice M.; Betjemann, Rebecca S.; Olson, Richard K.

    2008-01-01

    Comprehension tests are often used interchangeably, suggesting an implicit assumption that they are all measuring the same thing. We examine the validity of this assumption by comparing some of the most popular reading comprehension measures used in research and clinical practice in the United States: the Gray Oral Reading Test (GORT), the two…

  12. Evaluating a Brief Measure of Reading Comprehension for Narrative and Expository Text: The Convergent and Predictive Validity of the Reading Retell Rubric

    ERIC Educational Resources Information Center

    Thomas, Lisa B.

    2012-01-01

    Reading comprehension is a critical aspect of the reading process. Children who experience significant problems in reading comprehension are at risk for long-term academic and social problems. High-quality measures are needed for early, efficient, and effective identification of children in need of remediation in reading comprehension. Substantial…

  13. Comprehensive profiling of DNA repair defects in breast cancer identifies a novel class of endocrine therapy resistance drivers.

    PubMed

    Anurag, Meenakshi; Punturi, Nindo; Hoog, Jeremy; Bainbridge, Matthew N; Ellis, Matthew J; Haricharan, Svasti

    2018-05-23

    This study was undertaken to conduct a comprehensive investigation of the role of DNA damage repair (DDR) defects in poor-outcome ER+ disease. Expression and mutational status of DDR genes in ER+ breast tumors were correlated with proliferative response in neoadjuvant aromatase inhibitor therapy trials (discovery data set), with outcomes in the METABRIC, TCGA and Loi data sets (validation data sets), and in patient-derived xenografts. A causal relationship between candidate DDR genes and endocrine treatment response, and the underlying mechanism, was then tested in ER+ breast cancer cell lines. Loss of expression of three genes, CETN2 (p < 0.001) and ERCC1 (p = 0.01) from the nucleotide excision repair (NER) pathway and NEIL2 (p = 0.04) from the base excision repair (BER) pathway, was associated with endocrine treatment resistance in the discovery data sets and subsequently validated in independent patient cohorts. Complementary mutation analysis supported associations between mutations in NER and BER pathway genes and reduced endocrine treatment response. A causal role for CETN2, NEIL2 and ERCC1 loss in intrinsic endocrine resistance was experimentally validated in ER+ breast cancer cell lines and in ER+ patient-derived xenograft models. Loss of CETN2, NEIL2 or ERCC1 induced endocrine treatment resistance by dysregulating the G1/S transition and therefore increased sensitivity to CDK4/6 inhibitors. A combined DDR signature score was developed that predicted poor outcome in multiple patient cohorts. This report identifies DDR defects as a new class of endocrine treatment resistance drivers and indicates new avenues for predicting the efficacy of CDK4/6 inhibition in the adjuvant treatment setting. Copyright ©2018, American Association for Cancer Research.

  14. Effectiveness of ethics education as perceived by nursing students: development and testing of a novel assessment instrument.

    PubMed

    Vynckier, Tine; Gastmans, Chris; Cannaerts, Nancy; de Casterlé, Bernadette Dierckx

    2015-05-01

    The effectiveness of ethics education continues to be disputed. No studies exist on how nursing students perceive the effectiveness of nursing ethics education in Flanders, Belgium. The aims were to develop a valid and reliable instrument, named the 'Students' Perceived Effectiveness of Ethics Education Scale' (SPEEES), to measure students' perceptions of the effectiveness of ethics education, and to conduct a pilot study in Flemish nursing students to investigate the perceived efficacy of nursing ethics education in Flanders. Content validity, comprehensibility and usability of the SPEEES were assessed. Reliability was assessed by means of a quantitative, descriptive, non-experimental pilot study. 86 third-year baccalaureate nursing students of two purposefully selected university colleges answered the SPEEES. Formal approval was given by the ethics committee. Informed consent was obtained and anonymity was ensured for both colleges and their participating students. The scale content validity index (S-CVI/Ave) scores for the subscales were 1.00, 1.00 and 0.86. The comprehensibility and user-friendliness were favourable. Cronbach's alpha was 0.94 for general effectiveness, 0.89 for teaching methods and 0.85 for ethical content. Students perceived 'case study', 'lecture' and 'instructional dialogue' to be effective teaching methods and 'general ethical concepts' to contain effective content. 'Reflecting critically on their own values' was mentioned as the only ethical competence that was promoted by the ethics courses. The study revealed rather large differences between the two schools in students' perceptions of the contribution of ethics education to other ethical competences. The study revealed that, according to the students, ethics courses failed to meet some basic objectives of ethics education. Although the SPEEES proved to be a valid and reliable measure, the pilot study suggests that there is still room for improvement and a need for larger-scale research. Additional insights will enable educators to improve current nursing ethics education. © The Author(s) 2014.

  15. MoCha: Molecular Characterization of Unknown Pathways.

    PubMed

    Lobo, Daniel; Hammelman, Jennifer; Levin, Michael

    2016-04-01

    Automated methods for the reverse-engineering of complex regulatory networks are paving the way for the inference of mechanistic comprehensive models directly from experimental data. These novel methods can infer not only the relations and parameters of the known molecules defined in their input datasets, but also unknown components and pathways identified as necessary by the automated algorithms. Identifying the molecular nature of these unknown components is a crucial step for making testable predictions and experimentally validating the models, yet no specific and efficient tools exist to aid in this process. To this end, we present here MoCha (Molecular Characterization), a tool optimized for the search of unknown proteins and their pathways from a given set of known interacting proteins. MoCha uses the comprehensive dataset of protein-protein interactions provided by the STRING database, which currently includes more than a billion interactions from over 2,000 organisms. MoCha is highly optimized, performing typical searches within seconds. We demonstrate the use of MoCha with the characterization of unknown components from reverse-engineered models from the literature. MoCha is useful for working on network models by hand or as a downstream step of a model inference engine workflow and represents a valuable and efficient tool for the characterization of unknown pathways using known data from thousands of organisms. MoCha and its source code are freely available online under the GPLv3 license.

  16. Measuring Speech Comprehensibility in Students with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based…

  17. Incremental and Predictive Utility of Formative Assessment Methods of Reading Comprehension

    ERIC Educational Resources Information Center

    Marcotte, Amanda M.; Hintze, John M.

    2009-01-01

    Formative assessment measures are commonly used in schools to assess reading and to design instruction accordingly. The purpose of this research was to investigate the incremental and concurrent validity of formative assessment measures of reading comprehension. It was hypothesized that formative measures of reading comprehension would contribute…

  18. Accuracy of the DIBELS Oral Reading Fluency Measure for Predicting Third Grade Reading Comprehension Outcomes

    ERIC Educational Resources Information Center

    Roehrig, Alysia D.; Petscher, Yaacov; Nettles, Stephen M.; Hudson, Roxanne F.; Torgesen, Joseph K.

    2008-01-01

    We evaluated the validity of DIBELS ("Dynamic Indicators of Basic Early Literacy Skills") ORF ("Oral Reading Fluency") for predicting performance on the "Florida Comprehensive Assessment Test" (FCAT-SSS) and "Stanford Achievement Test" (SAT-10) reading comprehension measures. The usefulness of previously…

  19. [The use of systematic review to develop a self-management program for CKD].

    PubMed

    Lee, Yu-Chin; Wu, Shu-Fang Vivienne; Lee, Mei-Chen; Chen, Fu-An; Yao, Yen-Hong; Wang, Chin-Ling

    2014-12-01

    Chronic kidney disease (CKD) has become a public health issue of international concern due to its high prevalence. The concept of self-management has been comprehensively applied in education programs that address chronic diseases. In recent years, many studies have used self-management programs in CKD interventions and have investigated the pre- and post-intervention physiological and psychological effectiveness of this approach. However, a complete clinical application program based on the self-management model has yet to be developed for use in clinical renal care settings. In this study, a systematic review was used to develop a self-management program for CKD. Three implementation steps were used: (1) a systematic literature search and review using databases including CEPS (Chinese Electronic Periodical Services) of Airiti, the National Digital Library of Theses and Dissertations in Taiwan, CINAHL, PubMed, Medline, the Cochrane Library, and the Joanna Briggs Institute; a total of 22 studies were identified as valid and submitted to rigorous analysis, of which 4 were systematic literature reviews, 10 were randomized experimental studies, and 8 were non-randomized experimental studies; (2) the empirical evidence was then used to draft relevant guidelines for clinical application; (3) finally, expert panels tested the validity of the draft to ensure the final version was valid for application in practice. This study designed a self-management program for CKD based on the findings of empirical studies. The content of this program included the design principles, categories, elements, and intervention measures used in the self-management program. The program was then assessed using the content validity index (CVI) and a four-point Likert scale; the content validity score was .98. The guidelines for the CKD self-management program were thus developed. This study developed a self-management program applicable to local care of CKD. It is hoped that the guidelines developed in this study offer a reference for clinical caregivers to improve their healthcare practices.

  20. Macro- and Micro-Validation: Beyond the "Five Sources" Framework for Classifying Validation Evidence and Analysis

    ERIC Educational Resources Information Center

    Newton, Paul E.

    2016-01-01

    This paper argues that the dominant framework for conceptualizing validation evidence and analysis--the "five sources" framework from the 1999 "Standards"--is seriously limited. Its limitation raises a significant barrier to understanding the nature of comprehensive validation, and this presents a significant threat to…

  1. Validation of the Activities of Community Transportation model for individuals with cognitive impairments.

    PubMed

    Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Hung, Pei-Fang

    2009-01-01

    To develop a theoretical, functional model of community navigation for individuals with cognitive impairments: the Activities of Community Transportation (ACTs). Iterative design using qualitative methods (i.e. document review, focus groups and observations). Four agencies providing travel training to adults with cognitive impairments in the USA participated in the validation study. A thorough document review and series of focus groups led to the development of a comprehensive model (ACTs Wheels) delineating the requisite steps and skills for community navigation. The model was validated and updated based on observations of 395 actual trips by travellers with navigational challenges from the four participating agencies. Results revealed that the 'ACTs Wheel' models were complete and comprehensive. The 'ACTs Wheels' represent a comprehensive model of the steps needed to navigate to destinations using paratransit and fixed-route public transportation systems for travellers with cognitive impairments. Suggestions are made for future investigations of community transportation for this population.

  2. You don't have to believe everything you read: background knowledge permits fast and efficient validation of information.

    PubMed

    Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta

    2009-03-01

    In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.

  3. Development of Diagnostic Biomarkers for Detecting Diabetic Retinopathy at Early Stages Using Quantitative Proteomics

    PubMed Central

    Min, Hophil; Kim, Sang Jin; Oh, Sohee; Kim, Kyunggon; Yu, Hyeong Gon; Park, Taesung; Kim, Youngsoo

    2016-01-01

    Diabetic retinopathy (DR) is a common microvascular complication caused by diabetes mellitus (DM) and is a leading cause of vision impairment and loss among adults. Here, we performed a comprehensive proteomic analysis to discover biomarkers for DR. First, to identify biomarker candidates that are specifically expressed in human vitreous, we performed data-mining on both previously published DR-related studies and our experimental data; 96 proteins were then selected. The selected biomarker candidates were then confirmed and validated using plasma from diabetic patients without DR (No DR) and diabetics with mild or moderate nonproliferative diabetic retinopathy (Mi or Mo NPDR) by semiquantitative multiple reaction monitoring (SQ-MRM) and stable-isotope dilution multiple reaction monitoring (SID-MRM). Additionally, we performed a multiplex assay using 15 biomarker candidates identified in the SID-MRM analysis, which resulted in merged AUC values of 0.99 (No DR versus Mo NPDR) and 0.93 (No DR versus Mi and Mo NPDR). Although further validation with a larger sample size is needed, the 4-protein marker panel (APO4, C7, CLU, and ITIH2) could represent a useful multibiomarker model for detecting the early stages of DR. PMID:26665153
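
    For context on the AUC figures above, the snippet below shows how an area under the ROC curve is typically computed for a two-group comparison; the labels and panel scores are hypothetical placeholders, not data from this study:

    ```python
    # Standard ROC-AUC computation for a two-group comparison; the labels and the
    # combined marker-panel scores below are hypothetical placeholders, not study data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # 0 = No DR, 1 = NPDR (hypothetical)
    y_score = np.array([0.10, 0.30, 0.20, 0.40,    # hypothetical panel scores, No DR group
                        0.80, 0.70, 0.90, 0.60])   # hypothetical panel scores, NPDR group

    print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
    ```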

  4. Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) for Clinical and Research Applications: Recommendations of the International RDC/TMD Consortium Network* and Orofacial Pain Special Interest Group†

    PubMed Central

    Schiffman, Eric; Ohrbach, Richard; Truelove, Edmond; Look, John; Anderson, Gary; Goulet, Jean-Paul; List, Thomas; Svensson, Peter; Gonzalez, Yoly; Lobbezoo, Frank; Michelotti, Ambra; Brooks, Sharon L.; Ceusters, Werner; Drangsholt, Mark; Ettlin, Dominik; Gaul, Charly; Goldberg, Louis J.; Haythornthwaite, Jennifer A.; Hollender, Lars; Jensen, Rigmor; John, Mike T.; De Laat, Antoon; de Leeuw, Reny; Maixner, William; van der Meulen, Marylee; Murray, Greg M.; Nixdorf, Donald R.; Palla, Sandro; Petersson, Arne; Pionchon, Paul; Smith, Barry; Visscher, Corine M.; Zakrzewska, Joanna; Dworkin, Samuel F.

    2015-01-01

    Aims The original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the RDC/TMD Axis I validity was below the target sensitivity of ≥ 0.70 and specificity of ≥ 0.95. Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods Through a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project—the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings. Results The newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD as well as valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self-report instrument sets. The screening instruments’ 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions. Conclusion The recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. 
These validated instruments allow for identification of patients with a range of simple to complex TMD presentations. PMID:24482784

  5. Psychometric Research in Reading.

    ERIC Educational Resources Information Center

    Davis, Frederick B.

    This review of psychometric research in reading analyzes the factors which seem related to reading comprehension skills. Experimental analysis of reading comprehension by L. E. Thorndike revealed two major components: knowledge of word meanings and verbal reasoning abilities. Subsequent analysis of experimental studies of reading comprehension…

  6. Effect of attention therapy on reading comprehension.

    PubMed

    Solan, Harold A; Shelley-Tremblay, John; Ficarra, Anthony; Silverman, Michael; Larson, Steven

    2003-01-01

    This study quantified the influence of visual attention therapy on the reading comprehension of Grade 6 children with moderate reading disabilities (RD) in the absence of specific reading remediation. Thirty students with below-average reading scores were identified using standardized reading comprehension tests. Fifteen children were placed randomly in the experimental group and 15 in the control group. The Attention Battery of the Cognitive Assessment System was administered to all participants. The experimental group received 12 one-hour sessions of individually monitored, computer-based attention therapy programs; the control group received no therapy during their 12-week period. Each group was retested on attention and reading comprehension measures. In order to stimulate selective and sustained visual attention, the vision therapy stressed various aspects of arousal, activation, and vigilance. At the completion of attention therapy, the mean standard attention and reading comprehension scores of the experimental group had improved significantly. The control group, however, showed no significant improvement in reading comprehension scores after 12 weeks. Although uncertainties still exist, this investigation supports the notion that visual attention is malleable and that attention therapy has a significant effect on reading comprehension in this often neglected population.

  7. Bio-Optical Measurement and Modeling of the California Current and Polar Oceans. Chapter 13

    NASA Technical Reports Server (NTRS)

    Mitchell, B. Greg

    2001-01-01

    This Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project contract supports in situ ocean optical observations in the California Current, Southern Ocean and Indian Ocean, as well as the merger of other in situ data sets we have collected on various global cruises supported by separate grants or contracts. The principal goals of our research are to validate standard or experimental products through detailed bio-optical and biogeochemical measurements, and to combine ocean optical observations with advanced radiative transfer modeling to contribute to satellite vicarious radiometric calibration and advanced algorithm development. In collaboration with major oceanographic ship-based observation programs funded by various agencies (CalCOFI, US JGOFS, NOAA AMLR, INDOEX and Japan/East Sea), our SIMBIOS effort has resulted in data from diverse bio-optical provinces. For these global deployments we generate a high-quality, methodologically consistent data set encompassing a wide range of oceanic conditions. Global data collected in recent years have been integrated with our ongoing CalCOFI database and have been used to evaluate Sea-Viewing Wide Field-of-view Sensor (SeaWiFS) algorithms and to carry out validation studies. The combined database we have assembled now comprises more than 700 stations and includes observations for the clearest oligotrophic waters, highly eutrophic blooms, red tides and coastal Case 2 conditions. The data have been used to validate water-leaving radiance estimated with SeaWiFS as well as bio-optical algorithms for chlorophyll pigments. The comprehensive data set is used for the development of experimental algorithms (e.g., high-low latitude pigment transition, phytoplankton absorption, and CDOM).

  8. Content Validity of the Comprehensive ICF Core Set for Multiple Sclerosis from the Perspective of Speech and Language Therapists

    ERIC Educational Resources Information Center

    Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela

    2014-01-01

    Background: The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework to structure the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the International Classification of Functioning,…

  9. Validation of the Simple View of Reading in Hebrew--A Semitic Language

    ERIC Educational Resources Information Center

    Joshi, R. Malatesha; Ji, Xuejun Ryan; Breznitz, Zvia; Amiel, Meirav; Yulia, Astri

    2015-01-01

    The Simple View of Reading (SVR) in Hebrew was tested by administering decoding, listening comprehension, and reading comprehension measures to 1,002 students from Grades 2 to 10 in the northern part of Israel. Results from hierarchical regression analyses supported the SVR in Hebrew with decoding and listening comprehension measures explaining…

  10. Evaluation, Modification, and Validation of Pictograms Depicting Medication Instructions in the Elderly.

    PubMed

    Berthenet, Marion; Vaillancourt, Régis; Pouliot, Annie

    2016-01-01

    Poor health literacy has been recognized as a limiting factor in the elderly's ability to comprehend written or verbal medication information and to successfully adhere to medical regimens. The objective of this study was to validate a set of pictograms depicting medication instructions for use among the elderly to support health literacy. Elderly outpatients were recruited in 3 community pharmacies in Canada. One-on-one structured interviews were conducted to assess comprehension of 76 pictograms from the International Pharmaceutical Federation. Comprehension was assessed using transparency testing and pictogram translucency, i.e. the degree to which the pictogram represents the intended message. A total of 135 participants were enrolled in this study, and 76 pictograms were assessed. A total of 50 pictograms achieved more than 67% comprehension. Pictograms depicting precautions and warnings against certain side effects were generally not well understood. Gender, age, and education level all had a significant impact on the interpretation scores of certain individual pictograms. When all pictograms were included, younger males had a significantly higher comprehension score than older females, and participants with a higher level of education gave significantly higher translucency scores. Even for pictograms that reached the comprehension threshold set by the International Organization for Standardization in the general population, only 50 achieved more than 67% comprehension among the elderly, confirming that validation in this subpopulation should be conducted prior to using specific pictograms. Accompanying pictograms with education about the pictograms and important counseling points remains extremely important.

  11. The development of a multimedia online language assessment tool for young children with autism.

    PubMed

    Lin, Chu-Sui; Chang, Shu-Hui; Liou, Wen-Ying; Tsai, Yu-Show

    2013-10-01

    This study aimed to provide early childhood special education professionals with a standardized and comprehensive language assessment tool for the early identification of language learning characteristics (e.g., hyperlexia) of young children with autism. In this study, we used computer technology to develop a multimedia online language assessment tool that presents auditory or visual stimuli. This online comprehensive language assessment consists of six subtests: decoding, homographs, auditory vocabulary comprehension, visual vocabulary comprehension, auditory sentence comprehension, and visual sentence comprehension. Three hundred typically developing children and 35 children with autism, aged 4-6 years and from Tao-Yuan County in Taiwan, participated in this study. The Cronbach α values of the six subtests ranged from .64 to .97. The variance explained by the six subtests ranged from 14% to 56%, the concurrent validity of each subtest with the Peabody Picture Vocabulary Test-Revised ranged from .21 to .45, and the predictive validity of each subtest with the WISC-III ranged from .47 to .75. This assessment tool was also found to differentiate children with autism with up to 92% accuracy. These results indicate that this assessment tool has both adequate reliability and validity. Additionally, the 35 children with autism completed the entire assessment in this study without exhibiting any extremely troubling behaviors. However, future research is needed to increase the sample size of both typically developing children and young children with autism and to overcome the technical challenges associated with internet issues. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. L'Apport des Faits Phonetiques au Developpement de la Comprehension Auditive en Langue Seconde (The Influence of Phonetic Skills on the Development of Listening Comprehension in a Second Language).

    ERIC Educational Resources Information Center

    Champagne-Muzar, Cecile

    1996-01-01

    Ascertains the influence of the development of receptive phonetic skills on the level of listening comprehension of adults learning French as a second language in a formal setting. Test results indicate substantial gains in phonetics by the experimental group and a significant difference between the performance of experimental and control groups.…

  13. The Well-Being 5: Development and Validation of a Diagnostic Instrument to Improve Population Well-being

    PubMed Central

    Sears, Lindsay E.; Agrawal, Sangeeta; Sidney, James A.; Castle, Patricia H.; Coberley, Carter R.; Witters, Dan; Pope, James E.; Harter, James K.

    2014-01-01

    Building upon extensive research from 2 validated well-being instruments, the objective of this research was to develop and validate a comprehensive and actionable well-being instrument that informs and facilitates improvement of well-being for individuals, communities, and nations. The goals of the measure were comprehensiveness, validity and reliability, significant relationships with health and performance outcomes, and diagnostic capability for intervention. For measure development and validation, questions from the Well-being Assessment and Wellbeing Finder were simultaneously administered as a test item pool to over 13,000 individuals across 3 independent samples. Exploratory factor analysis was conducted on a random selection from the first sample and confirmed in the other samples. Further evidence of validity was established through correlations to the established well-being scores from the Well-Being Assessment and Wellbeing Finder, and individual outcomes capturing health care utilization and productivity. Results showed the Well-Being 5 score comprehensively captures the known constructs within well-being, demonstrates good reliability and validity, significantly relates to health and performance outcomes, is diagnostic and informative for intervention, and can track and compare well-being over time and across groups. With this tool, well-being deficiencies within a population can be effectively identified, prioritized, and addressed, yielding the potential for substantial improvements to the health status, performance, and quality of life for individuals and cost savings for stakeholders. (Population Health Management 2014;17:357–365) PMID:24892873

  14. [Reliability and validity of the Chinese version on Comprehensive Scores for Financial Toxicity based on the patient-reported outcome measures].

    PubMed

    Yu, H H; Bi, X; Liu, Y Y

    2017-08-10

    Objective: To evaluate the reliability and validity of the Chinese version of the comprehensive scores for financial toxicity (COST), based on patient-reported outcome measures. Methods: A total of 118 cancer patients were interviewed face-to-face by well-trained investigators. Cronbach's α and Pearson correlation coefficients were used to evaluate reliability. The content validity index (CVI) and exploratory factor analysis (EFA) were used to evaluate content validity and construct validity, respectively. Results: The Cronbach's α coefficient was 0.889 for the whole questionnaire, with test-retest correlation coefficients between 0.77 and 0.98. The scale-level content validity index (S-CVI) was 0.82, with item-level content validity indices (I-CVI) between 0.83 and 1.00. Two components were extracted by the exploratory factor analysis, with a cumulative variance contribution of 68.04% and loadings > 0.60 for every item. Conclusion: The Chinese version of the COST scale showed high reliability and good validity, and thus can be applied to assess financial toxicity in cancer patients.
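
    As an illustration of the content validity indices reported above, the sketch below computes I-CVI and S-CVI/Ave from a small matrix of hypothetical expert relevance ratings (not the study's expert panel data):

    ```python
    # Item- and scale-level content validity indices from expert relevance ratings on a
    # 4-point scale; the ratings matrix is hypothetical, not the study's expert panel data.
    import numpy as np

    ratings = np.array([        # rows = items, columns = experts (1 = not relevant ... 4 = highly relevant)
        [4, 4, 3, 4, 4, 3],
        [3, 4, 4, 4, 2, 4],
        [4, 3, 4, 4, 4, 4],
    ])

    i_cvi = (ratings >= 3).mean(axis=1)   # proportion of experts rating each item 3 or 4
    s_cvi_ave = i_cvi.mean()              # S-CVI/Ave: average of the item-level indices

    print("I-CVI per item:", np.round(i_cvi, 2))
    print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
    ```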

  15. Emotional state talk and emotion understanding: a training study with preschool children.

    PubMed

    Gavazzi, Ilaria Grazzani; Ornaghi, Veronica

    2011-11-01

    The present study investigates whether training preschool children in the active use of emotional state talk plays a significant role in bringing about greater understanding of emotion terms and improved emotion comprehension. Participants were 100 preschool children (M = 52 months; SD = 9.9; range: 35-70 months), randomly assigned to experimental or control conditions. They were pre- and post-tested to assess their language comprehension, metacognitive language comprehension and emotion understanding. Analyses of pre-test data did not show any significant differences between the experimental and control groups. During the intervention phase, the children were read stories enriched with emotional lexicon. After listening to the stories, children in the experimental group took part in conversational language games designed to stimulate the use of the selected emotional terms. In contrast, the control-group children did not take part in any special linguistic activities after the story readings. Analyses revealed that the experimental group outperformed the control group in the understanding of inner state language and in the comprehension of emotion.

  16. Cross-cultural adaptation and validation of the Korean Toronto Extremity Salvage Score for extremity sarcoma.

    PubMed

    Kim, Han-Soo; Yun, JiYeon; Kang, Seungcheol; Han, Ilkyu

    2015-07-01

    A Korean version of the Toronto Extremity Salvage Score (TESS), a widely used disease-specific patient-reported questionnaire for assessing the physical function of sarcoma patients, has not previously been developed. The aims of this study were 1) to translate and cross-culturally adapt the TESS into Korean, and 2) to examine its comprehensibility, reliability and validity. The TESS was translated into Korean, then translated back into English, and reviewed by a committee to develop the consensus version of the Korean TESS. The Korean TESS was administered to 126 patients to examine its comprehensibility, reliability, and validity. Comprehensibility was high, as the patients rated questions as "easy" or "very easy" in 96% of cases for the TESS lower extremity (LE) and in 97% for the TESS upper extremity (UE). Test-retest reliability, assessed with the intraclass correlation coefficient (0.874 for LE and 0.979 for UE), and internal consistency, assessed with Cronbach's alpha (0.978 for LE and 0.989 for UE), were excellent. The Korean TESS correlated with the MSTS score (r = 0.772 for LE and r = 0.635 for UE) and with the physical functioning domain of the EORTC QLQ-C30 (r = 0.840 for LE and r = 0.630 for UE). Our study suggests that the Korean version of the TESS is a comprehensible, reliable, and valid instrument for measuring patient-reported functional outcome in patients with extremity sarcoma. © 2015 Wiley Periodicals, Inc.

  17. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...

  18. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...

  19. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...

  20. Thumb-loops up for catalysis: a structure/function investigation of a functional loop movement in a GH11 xylanase

    PubMed Central

    Paës, Gabriel; Cortés, Juan; Siméon, Thierry; O'Donohue, Michael J.; Tran, Vinh

    2012-01-01

    Dynamics is a key feature of enzyme catalysis. Unfortunately, current experimental and computational techniques do not yet provide a comprehensive understanding and description of functional macromolecular motions. In this work, we have extended a novel computational technique, which combines molecular modeling methods and robotics algorithms, to investigate functional motions of protein loops. This new approach has been applied to study the functional importance of the so-called thumb-loop in the glycoside hydrolase family 11 xylanase from Thermobacillus xylanilyticus (Tx-xyl). The results obtained provide new insight into the role of the loop in the glycosylation/deglycosylation catalytic cycle, and underline the key importance of the nature of the residue located at the tip of the thumb-loop. The effect of mutations predicted in silico has been validated by in vitro site-directed mutagenesis experiments. Overall, we propose a comprehensive model of Tx-xyl catalysis in terms of substrate and product dynamics by identifying the action of the thumb-loop motion during catalysis. PMID:24688637

  1. Effects of Classroom Practices on Reading Comprehension, Engagement, and Motivations for Adolescents

    PubMed Central

    Guthrie, John T.; Klauda, Susan Lutz

    2014-01-01

    We investigated the roles of classroom supports for multiple motivations and engagement in students’ informational text comprehension, motivation, and engagement. A composite of classroom contextual variables consisting of instructional support for choice, importance, collaboration, and competence, accompanied by cognitive scaffolding for informational text comprehension, was provided in four-week instructional units for 615 grade 7 students. These classroom motivational-engagement supports were implemented within integrated literacy/history instruction in the Concept-Oriented Reading Instruction (CORI) framework. CORI increased informational text comprehension compared with traditional instruction (TI) in a switching replications experimental design. Students’ perceptions of the motivational-engagement supports were associated with increases in students’ intrinsic motivation, value, perceived competence, and increased positive engagement (dedication) more markedly in CORI than in TI, according to multiple regression analyses. Results extended the evidence for the effectiveness of CORI to literacy/history subject matter and informational text comprehension among middle school students. The experimental effects in classroom contexts confirmed effects from task-specific, situated experimental studies in the literature. PMID:25506087

  2. Effects of Classroom Practices on Reading Comprehension, Engagement, and Motivations for Adolescents.

    PubMed

    Guthrie, John T; Klauda, Susan Lutz

    2014-10-01

    We investigated the roles of classroom supports for multiple motivations and engagement in students' informational text comprehension, motivation, and engagement. A composite of classroom contextual variables consisting of instructional support for choice, importance, collaboration, and competence, accompanied by cognitive scaffolding for informational text comprehension, was provided in four-week instructional units for 615 grade 7 students. These classroom motivational-engagement supports were implemented within integrated literacy/history instruction in the Concept-Oriented Reading Instruction (CORI) framework. CORI increased informational text comprehension compared with traditional instruction (TI) in a switching replications experimental design. Students' perceptions of the motivational-engagement supports were associated with increases in students' intrinsic motivation, value, perceived competence, and increased positive engagement (dedication) more markedly in CORI than in TI, according to multiple regression analyses. Results extended the evidence for the effectiveness of CORI to literacy/history subject matter and informational text comprehension among middle school students. The experimental effects in classroom contexts confirmed effects from task-specific, situated experimental studies in the literature.

  3. Comprehensive mathematical model of oxidative phosphorylation valid for physiological and pathological conditions.

    PubMed

    Heiske, Margit; Letellier, Thierry; Klipp, Edda

    2017-09-01

    We developed a mathematical model of oxidative phosphorylation (OXPHOS) that allows for a precise description of mitochondrial function with respect to the respiratory flux and ATP production. The model reproduced flux-force relationships under various experimental conditions (states 3 and 4, uncoupling, and shortage of respiratory substrate) as well as time courses, exhibiting correct P/O ratios. The model was able to reproduce experimental threshold curves for perturbations of the respiratory chain complexes, the F1F0-ATP synthase, the ADP/ATP carrier, the phosphate/OH carrier, and the proton leak. Thus, the model is well suited to studying complex interactions within the OXPHOS system, especially with respect to physiological adaptations or pathological modifications influencing substrate and product affinities or maximal catalytic rates. Moreover, it could be a useful tool to study the role of OXPHOS and its capacity to compensate for or enhance physiopathologies of mitochondrial and cellular energy metabolism. © 2017 Federation of European Biochemical Societies.

  4. Reconciled rat and human metabolic networks for comparative toxicogenomics and biomarker predictions

    PubMed Central

    Blais, Edik M.; Rawls, Kristopher D.; Dougherty, Bonnie V.; Li, Zhuo I.; Kolling, Glynis L.; Ye, Ping; Wallqvist, Anders; Papin, Jason A.

    2017-01-01

    The laboratory rat has been used as a surrogate to study human biology for more than a century. Here we present the first genome-scale network reconstruction of Rattus norvegicus metabolism, iRno, and a significantly improved reconstruction of human metabolism, iHsa. These curated models comprehensively capture metabolic features known to distinguish rats from humans including vitamin C and bile acid synthesis pathways. After reconciling network differences between iRno and iHsa, we integrate toxicogenomics data from rat and human hepatocytes, to generate biomarker predictions in response to 76 drugs. We validate comparative predictions for xanthine derivatives with new experimental data and literature-based evidence delineating metabolite biomarkers unique to humans. Our results provide mechanistic insights into species-specific metabolism and facilitate the selection of biomarkers consistent with rat and human biology. These models can serve as powerful computational platforms for contextualizing experimental data and making functional predictions for clinical and basic science applications. PMID:28176778
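
    Genome-scale reconstructions like the ones described above are usually interrogated with flux balance analysis, i.e. a linear program over the stoichiometric matrix. The toy example below is a generic, hedged illustration of that technique on a three-reaction chain; it is not the iRno or iHsa model:

    ```python
    # Toy flux balance analysis (FBA) on a 3-reaction chain, solved as a linear program;
    # a generic illustration of how genome-scale models are queried, NOT iRno/iHsa.
    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix S (rows = metabolites A and B; columns = reactions):
    #   R1: -> A,   R2: A -> B,   R3: B ->
    S = np.array([
        [ 1.0, -1.0,  0.0],   # metabolite A balance
        [ 0.0,  1.0, -1.0],   # metabolite B balance
    ])
    bounds = [(0.0, 10.0)] * 3        # lower/upper flux bounds for R1-R3
    c = np.array([0.0, 0.0, -1.0])    # maximize flux through R3 (linprog minimizes)

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal steady-state fluxes:", res.x)   # expected: [10, 10, 10]
    ```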

  5. Experiment for validation of fluid-structure interaction models and algorithms.

    PubMed

    Hessenthaler, A; Gaddum, N R; Holub, O; Sinkus, R; Röhrle, O; Nordsletten, D

    2017-09-01

    In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-setup FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow in the laminar regime at Reynolds numbers of 1283 and 651. Flow and solid domains were defined using computer-aided design (CAD) tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by using magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion. Copyright © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.

  6. Nonlinear elasticity in rocks: A comprehensive three-dimensional description

    DOE PAGES

    Lott, Martin; Remillieux, Marcel; Garnier, Vincent; ...

    2017-07-17

    Here we study theoretically and experimentally the mechanisms of nonlinear and nonequilibrium dynamics in geomaterials through dynamic acoustoelasticity testing. In the proposed theoretical formulation, the classical theory of nonlinear elasticity is extended to include the effects of conditioning. This formulation is adapted to the context of dynamic acoustoelasticity testing in which a low-frequency “pump” wave induces a strain field in the sample and modulates the propagation of a high-frequency “probe” wave. Experiments are conducted to validate the formulation in a long thin bar of Berea sandstone. Several configurations of the pump and probe are examined: the pump successively consists of the first longitudinal and first torsional mode of vibration of the sample while the probe is successively based on (pressure) P and (shear) S waves. The theoretical predictions reproduce many features of the elastic response observed experimentally, in particular, the coupling between nonlinear and nonequilibrium dynamics and the three-dimensional effects resulting from the tensorial nature of elasticity.

  7. Effects of an intervention in active strategies for text comprehension and recall.

    PubMed

    Elosúa, M Rosa; García-Madruga, Juan A; Gutiérrez, Francisco; Luque, Juan Luis; Gárate, Milagros

    2002-11-01

    The aim of this study was to investigate the effects of an intervention program to promote active text-processing strategies (main-idea identification and summarization) at two developmental levels (12- and 16-year-olds). The independent variables were training condition (experimental and control) and school level (7th and 10th grades). Several measures were taken as dependent variables: reading span, reading time, construction of macrostructure, and structural recall. The hypothesis claimed that training would increase comprehension and recall significantly. Furthermore, as a result of the training program, a reduction in developmental differences in the experimental groups at posttest was also expected. Results supported the predictions, showing a significant improvement in the experimental groups' reading comprehension and recall. These results are discussed in terms of the importance of active and self-controlled strategies for text comprehension and recall.

  8. Validation and augmentation of Inrix arterial travel time data using independent sources : [research summary].

    DOT National Transportation Integrated Search

    2015-02-01

    Although the freeway travel time data has been validated extensively in recent years, the quality of arterial travel time data is not well known. This project presents a comprehensive validation scheme for arterial travel time data based on GPS...

  9. Experimental validation of a sub-surface model of solar power for distributed marine sensor systems

    NASA Astrophysics Data System (ADS)

    Hahn, Gregory G.; Cantin, Heather P.; Shafer, Michael W.

    2016-04-01

    The capabilities of distributed sensor systems such as marine wildlife telemetry tags could be significantly enhanced through the integration of photovoltaic modules. Photovoltaic cells could be used to supplement the primary batteries for wildlife telemetry tags to allow for extended tag deployments, wherein larger amounts of data could be collected and transmitted in near real time. In this article, we present experimental results used to validate and improve key aspects of our original model for sub-surface solar power. We discuss the test methods and results, comparing analytic predictions to experimental results. In a previous work, we introduced a model for sub-surface solar power that used analytic models and empirical data to predict the solar irradiance available for harvest at any depth under the ocean's surface over the course of a year. This model presented underwater photovoltaic transduction as a viable means of supplementing energy for marine wildlife telemetry tags. The additional data provided by improvements in daily energy budgets would enhance the temporal and spatial comprehension of the host's activities and/or environments. Photovoltaic transduction is one method that has not been widely deployed in the sub-surface marine environments despite widespread use on terrestrial and avian species wildlife tag systems. Until now, the use of photovoltaic cells for underwater energy harvesting has generally been disregarded as a viable energy source in this arena. In addition to marine telemetry systems, photovoltaic energy harvesting systems could also serve as a means of energy supply for autonomous underwater vehicles (AUVs), as well as submersible buoys for oceanographic data collection.

  10. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, time-saving but accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  11. DE-NE0008277_PROTEUS final technical report 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas

    This project details the re-evaluation of gas-cooled fast reactor (GCFR) core design experiments performed in the 1970s at the PROTEUS reactor and the creation of a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no GCFR experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.

  12. Beyond Screening and Progress Monitoring: An Examination of the Reliability and Concurrent Validity of Maze Comprehension Assessments for Fourth-Grade Students

    ERIC Educational Resources Information Center

    Brasher, Casey F.

    2017-01-01

    Reading comprehension assessments often lack instructional utility because they do not accurately pinpoint why a student has difficulty. The varying formats, directions, and response requirements of comprehension assessments lead to differential measurement of underlying skills and contribute to noted amounts of unshared variance among tests. Maze…

  13. Development and Validity of the Rating Scales of Academic Skills for Reading Comprehension

    ERIC Educational Resources Information Center

    Shapiro, Edward S.; Gebhardt, Sarah; Flatley, Katie; Guard, Kirra B.; Fu, Qiong; Leichman, Erin S.; Calhoon, Mary Beth; Hojnoski, Robin

    2017-01-01

    The development and psychometric qualities of a measure using teacher judgment to rate performance in reading comprehension for narrative text is described--the Rating Scales for Academic Skills-Reading Comprehension Narrative (RSAS-RCN). Sixty-five teachers from the third, fourth, and fifth grades of 8 elementary schools completed the measure on…

  14. Assessing Reading Comprehension in Adolescent Low Achievers: Subskills Identification and Task Specificity

    ERIC Educational Resources Information Center

    van Steensel, Roel; Oostdam, Ron; van Gelderen, Amos

    2013-01-01

    On the basis of a validation study of a new test for assessing low-achieving adolescents' reading comprehension skills--the SALT-reading--we analyzed two issues relevant to the field of reading test development. Using the test results of 200 seventh graders, we examined the possibility of identifying reading comprehension subskills and the effects…

  15. Validation of a Cognitive Diagnostic Model across Multiple Forms of a Reading Comprehension Assessment

    ERIC Educational Resources Information Center

    Clark, Amy K.

    2013-01-01

    The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…

  16. Development and Validation of an Online Dynamic Assessment for Raising Students' Comprehension of Science Text

    ERIC Educational Resources Information Center

    Wang, Jing-Ru; Chen, Shin-Feng

    2016-01-01

    This article reports on the development of an online dynamic approach for assessing and improving students' reading comprehension of science texts--the dynamic assessment for reading comprehension of science text (DARCST). The DARCST blended assessment and response-specific instruction into a holistic learning task for grades 5 and 6 students. The…

  17. Abbreviations: Their Effects on Comprehension of Classified Advertisements.

    ERIC Educational Resources Information Center

    Sokol, Kirstin R.

    Two experimental designs were used to test the hypothesis that abbreviations in classified advertisements decrease the reader's comprehension of such ads. In the first experimental design, 73 high school students read four ads (for employment, used cars, apartments for rent, and articles for sale) either with abbreviations or with all…

  18. Effects of Note-Taking Training on Reading Comprehension and Recall

    ERIC Educational Resources Information Center

    Rahmani, Mina; Sadeghi, Karim

    2011-01-01

    The present study examined the process and product effects of note-taking strategy training on Iranian EFL learners' comprehension and retention of written material, with gender as a moderating variable. Intermediate undergraduate EFL learners (N = 108) were assigned to experimental and control groups. The Experimental (intervention) Group…

  19. Comprehensive Experimental and Computational Analysis of Binding Energy Hot Spots at the NF-κB Essential Modulator (NEMO)/IKKβ Protein-Protein Interface

    PubMed Central

    Golden, Mary S.; Cote, Shaun M.; Sayeg, Marianna; Zerbe, Brandon S.; Villar, Elizabeth A.; Beglov, Dmitri; Sazinsky, Stephen L.; Georgiadis, Rosina M.; Vajda, Sandor; Kozakov, Dima; Whitty, Adrian

    2013-01-01

    We report a comprehensive analysis of binding energy hot spots at the protein-protein interaction (PPI) interface between NF-κB Essential Modulator (NEMO) and IκB kinase subunit β (IKKβ), an interaction that is critical for NF-κB pathway signaling, using experimental alanine scanning mutagenesis and also the FTMap method for computational fragment screening. The experimental results confirm that the previously identified NBD region of IKKβ contains the highest concentration of hot spot residues, the strongest of which are W739, W741 and L742 (ΔΔG = 4.3, 3.5 and 3.2 kcal/mol, respectively). The region occupied by these residues defines a potentially druggable binding site on NEMO that extends for ~16 Å to additionally include the regions that bind IKKβ L737 and F734. NBD residues D738 and S740 are also important for binding but do not make direct contact with NEMO, instead likely acting to stabilize the active conformation of surrounding residues. We additionally found two previously unknown hot spot regions centered on IKKβ residues L708/V709 and L719/I723. The computational approach successfully identified all three hot spot regions on IKKβ. Moreover, the method was able to accurately quantify the energetic importance of all hot spot residues that make direct contact with NEMO. Our results provide new information to guide the discovery of small-molecule inhibitors that target the NEMO/IKKβ interaction. They additionally clarify the structural and energetic complementarity between “pocket-forming” and “pocket-occupying” hot spot residues, and further validate computational fragment mapping as a method for identifying hot spots at PPI interfaces. PMID:23506214
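
    For orientation, an alanine-scanning ΔΔG can be converted into the corresponding fold-loss in binding affinity via ΔΔG = RT ln(Kd,mut/Kd,wt). The short sketch below applies this standard thermodynamic relation to the hot-spot values quoted above; it is a generic calculation, not a method taken from the paper:

      # Convert alanine-scan ddG values (kcal/mol) into fold-change in Kd at 25 °C.
      import math

      R = 1.987e-3          # gas constant, kcal/(mol*K)
      T = 298.15            # temperature, K

      hot_spots = {"W739": 4.3, "W741": 3.5, "L742": 3.2}  # ddG from the abstract, kcal/mol
      for residue, ddG in hot_spots.items():
          fold_weaker = math.exp(ddG / (R * T))
          print(f"{residue}: binding ~{fold_weaker:,.0f}-fold weaker on mutation to Ala")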

  20. The Effect of Screen Size on Mobile Phone User Comprehension of Health Information and Application Structure: An Experimental Approach.

    PubMed

    Al Ghamdi, Ebtisam; Yunus, Faisal; Da'ar, Omar; El-Metwally, Ashraf; Khalifa, Mohamed; Aldossari, Bakheet; Househ, Mowafa

    2016-01-01

    This research analyzes the impact of mobile phone screen size on user comprehension of health information and application structure. Applying an experimental approach, we asked randomly selected users to read content and conduct tasks on a commonly used diabetes mobile application using three different mobile phone screen sizes. We timed and tracked a number of parameters, including correctness, effectiveness of completing tasks, content ease of reading, clarity of information organization, and comprehension. The impact of screen size on user comprehension/retention, clarity of information organization, and reading time was mixed. It might be assumed at first glance that mobile screen size would affect all qualities of information reading and comprehension, including clarity of displayed information organization, reading time, and user comprehension/retention of displayed information, but in this experimental research screen size did not have a significant impact on user comprehension/retention of the content or on understanding of the application structure. However, it did have a significant impact on clarity of information organization and reading time. Participants with the larger screen size took less time reading the content, with a significant difference in ease of reading. While there was no significant difference in the comprehension of information or the application structure, there was a higher task completion rate and a lower number of errors with the bigger screen size. Screen size does not directly affect user comprehension of health information. However, it does affect clarity of information organization, reading time, and the user's ability to recall information.

  1. A new test for the assessment of working memory in clinical settings: Validation and norming of a month ordering task.

    PubMed

    Buekenhout, Imke; Leitão, José; Gomes, Ana A

    2018-05-24

    Month ordering tasks have been used in experimental settings to obtain measures of working memory (WM) capacity in older/clinical groups based solely on their face validity. We sought to assess the appropriateness of using a month ordering task in other contexts, including clinical settings, as a psychometrically sound WM assessment. To this end, we constructed a month ordering task (ucMOT), studied its reliability (internal consistency and temporal stability), and gathered construct-related and criterion-related validity evidence for its use as a WM assessment. The ucMOT proved to be internally consistent and temporally stable, and analyses of the criterion-related validity evidence revealed that its scores predicted the efficiency of language comprehension processes known to depend crucially on WM resources, namely, processes involved in pronoun interpretation. Furthermore, all ucMOT items discriminated between younger and older age groups; the global scores were significantly correlated with scores on well-established WM tasks and presented lower correlations with instruments that evaluate different (although related) processes, namely, inhibition and processing speed. We conclude that the ucMOT possesses solid psychometric properties. Accordingly, we acquired normative data for the Portuguese population, which we present as a regression-based algorithm that yields z scores adjusted for age, gender, and years of formal education. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. HIPdb: a database of experimentally validated HIV inhibiting peptides.

    PubMed

    Qureshi, Abid; Thakur, Nishant; Kumar, Manoj

    2013-01-01

    Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs at a single platform. HIPdb is a manually curated database of experimentally verified HIV-inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV, e.g. fusion, integration, and reverse transcription. This database provides experimental information on 981 peptides. These are of varying length, obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC50, assay and reference. The database provides user-friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user-provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. The HIPdb database is freely available at http://crdd.osdd.net/servers/hipdb. The comprehensive information in this database will be helpful in selecting/designing effective anti-HIV peptides. Thus it may prove a useful resource to researchers for peptide-based therapeutics development.

  3. Thermal Management Using Pulsating Jet Cooling Technology

    NASA Astrophysics Data System (ADS)

    Alimohammadi, S.; Dinneen, P.; Persoons, T.; Murray, D. B.

    2014-07-01

    Existing methods of heat removal from compact electronic devices are becoming insufficient as evolving technology demands higher power densities and, accordingly, better cooling techniques. Impinging jets can be used as a satisfactory method for thermal management of electronic devices with limited space and volume. Pulsating flows can produce an additional enhancement in heat transfer rate compared to steady flows. This article is part of a comprehensive experimental and numerical study performed on pulsating jet cooling technology. The experimental approach explores heat transfer performance of a pulsating air jet impinging onto a flat surface for nozzle-to-surface distances 1 <= H/D <= 6, Reynolds numbers 1,300 <= Re <= 2,800, pulsation frequencies 2 Hz <= f <= 65 Hz, and Strouhal numbers 0.0012 <= Sr = fD/Um <= 0.084. The time-resolved velocity at the nozzle exit is measured to quantify the turbulence intensity profile. The numerical methodology is first validated using the experimental local Nusselt number distribution for the steady jet with the same geometry and boundary conditions. For a time-averaged Reynolds number of 6,000, the heat transfer enhancement of the pulsating jet is calculated for 9 Hz <= f <= 55 Hz, 0.017 <= Sr <= 0.102, and 1 <= H/D <= 6. For the same range of Sr, the numerical and experimental methods show consistent results.
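
    The dimensionless groups used above follow directly from their definitions (Sr = f·D/Um, Re = Um·D/ν). A quick sketch, with illustrative nozzle diameter and mean velocity values that are assumptions rather than figures from the article:

      # Reynolds and Strouhal numbers for a pulsating impinging air jet.
      # Sr = f * D / Um (definition used in the abstract); Re = Um * D / nu.
      D  = 10e-3      # nozzle diameter [m] -- illustrative assumption
      Um = 4.0        # mean jet exit velocity [m/s] -- illustrative assumption
      nu = 1.5e-5     # kinematic viscosity of air [m^2/s]
      f  = 30.0       # pulsation frequency [Hz]

      Re = Um * D / nu
      Sr = f * D / Um
      print(f"Re = {Re:.0f}, Sr = {Sr:.3f}")   # e.g. Re ~ 2667, Sr = 0.075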

  4. Accurate prediction of secondary metabolite gene clusters in filamentous fungi.

    PubMed

    Andersen, Mikael R; Nielsen, Jakob B; Klitgaard, Andreas; Petersen, Lene M; Zachariasen, Mia; Hansen, Tilde J; Blicher, Lene H; Gotfredsen, Charlotte H; Larsen, Thomas O; Nielsen, Kristian F; Mortensen, Uffe H

    2013-01-02

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross-chemistry between physically separate gene clusters (superclusters), and validate this both with legacy data and experimentally by prediction and verification of a supercluster consisting of the synthase AN1242 and the prenyltransferase AN11080, as well as identification of the product compound nidulanin A. We have used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom.
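
    The guilt-by-association step amounts to asking how strongly the expression profile of each gene neighbouring a synthase correlates with the synthase itself across conditions. A minimal sketch of that idea, with random data and a simple correlation cutoff standing in for the authors' array compendium and statistics:

      # Guilt-by-association sketch: flag genes whose expression tracks a synthase.
      import numpy as np

      rng = np.random.default_rng(0)
      n_conditions = 40
      synthase = rng.normal(size=n_conditions)

      # Expression of ten neighbouring genes; the first four co-vary with the synthase.
      neighbours = {f"gene_{i}": (0.9 * synthase + 0.3 * rng.normal(size=n_conditions))
                    if i < 4 else rng.normal(size=n_conditions)
                    for i in range(10)}

      cutoff = 0.6  # illustrative correlation threshold
      cluster = [g for g, expr in neighbours.items()
                 if np.corrcoef(synthase, expr)[0, 1] > cutoff]
      print("predicted cluster members:", cluster)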

  5. Early Results from the Advanced Radiation Protection Thick GCR Shielding Project

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Clowdsley, Martha; Slaba, Tony; Heilbronn, Lawrence; Zeitlin, Cary; Kenny, Sean; Crespo, Luis; Giesy, Daniel; Warner, James; McGirl, Natalie

    2017-01-01

    The Advanced Radiation Protection Thick Galactic Cosmic Ray (GCR) Shielding Project leverages experimental and modeling approaches to validate a predicted minimum in the radiation exposure versus shielding depth curve. Preliminary results of space radiation models indicate that a minimum in the dose equivalent versus aluminum shielding thickness may exist in the 20-30 g/cm2 region. For greater shield thickness, dose equivalent increases due to secondary neutron and light particle production. This result goes against the long held belief in the space radiation shielding community that increasing shielding thickness will decrease risk to crew health. A comprehensive modeling effort was undertaken to verify the preliminary modeling results using multiple Monte Carlo and deterministic space radiation transport codes. These results verified the preliminary findings of a minimum and helped drive the design of the experimental component of the project. In first-of-their-kind experiments performed at the NASA Space Radiation Laboratory, neutrons and light ions were measured between large thicknesses of aluminum shielding. Both an upstream and a downstream shield were incorporated into the experiment to represent the radiation environment inside a spacecraft. These measurements are used to validate the Monte Carlo codes and derive uncertainty distributions for exposure estimates behind thick shielding similar to that provided by spacecraft on a Mars mission. Preliminary results for all aspects of the project will be presented.

  6. Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.

    New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. A preliminary investigation with multi-scale TGLF, however, was unable to resolve the discrepancy between ion-scale GYRO and experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.

  7. Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod

    DOE PAGES

    Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.; ...

    2017-03-02

    New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. A preliminary investigation with multi-scale TGLF, however, was unable to resolve the discrepancy between ion-scale GYRO and experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.

  8. Effect of porosity and tortuosity of electrodes on carbon polymer soft actuators

    NASA Astrophysics Data System (ADS)

    S, Sunjai Nakshatharan; Punning, Andres; Johanson, Urmas; Aabloo, Alvo

    2018-01-01

    This work presents an electro-mechanical model and simulation of ionic electroactive polymer soft actuators with a porous carbon electrode, polymer membrane, and ionic liquid electrolyte. An attempt is made to understand the effects of specific properties of the porous electrodes, such as porosity and tortuosity, on the charge dynamics and mechanical performance of the actuator. The model uses porous electrode theory to study the electrochemical response of the system. The mechanical response of the whole laminate is attributed to the evolution of local stresses caused by diffusion of ions (diffusion-induced stresses or chemical stresses). The model indicates that in actuators with porous electrodes, the diffusion coefficient of ions, conductivity of the electrodes, and ionic conductivity in both electrodes and separator are altered significantly. In addition, the model leads to the deduction that the more mobile ionic species dominates the direction and rate of the resulting mechanical deformation. Finally, to validate the model, simulations are conducted using the finite element method, and the outcomes are compared with the experimental data. Significant effort was devoted to experimentally measuring the key parameters essential for validating the model. The results show that the developed model predicts the behavior of the actuator well, providing a comprehensive understanding of charge dynamics in ionic polymer actuators with porous electrodes.

  9. Probing the genome-scale metabolic landscape of Bordetella pertussis, the causative agent of whooping cough.

    PubMed

    Branco Dos Santos, Filipe; Olivier, Brett G; Boele, Joost; Smessaert, Vincent; De Rop, Philippe; Krumpochova, Petra; Klau, Gunnar W; Giera, Martin; Dehottay, Philippe; Teusink, Bas; Goffin, Philippe

    2017-08-25

    Whooping cough is a highly contagious respiratory disease caused by Bordetella pertussis. Despite vaccination, its incidence has been rising alarmingly, and yet, the physiology of B. pertussis remains poorly understood. We combined genome-scale metabolic reconstruction, a novel optimization algorithm and experimental data to probe the full metabolic potential of this pathogen, using strain Tohama I as a reference. Experimental validation showed that B. pertussis secretes a significant proportion of nitrogen as arginine and purine nucleosides, which may contribute to modulation of the host response. We also found that B. pertussis can be unexpectedly versatile, being able to metabolize many compounds while displaying minimal nutrient requirements. It can grow without cysteine - using inorganic sulfur sources such as thiosulfate - and it can grow on organic acids such as citrate or lactate as sole carbon sources, providing in vivo demonstration that its TCA cycle is functional. Although the metabolic reconstruction of eight additional strains indicates that the structural genes underlying this metabolic flexibility are widespread, experimental validation suggests a role of strain-specific regulatory mechanisms in shaping metabolic capabilities. Among five alternative strains tested, three were shown to grow on substrate combinations requiring a functional TCA cycle, but only one could use thiosulfate. Finally, the metabolic model was used to rationally design growth media with over two-fold improvements in pertussis toxin production. This study thus provides novel insights into B. pertussis physiology, and highlights the potential, but also the limitations, of models based solely on metabolic gene content. IMPORTANCE The metabolic capabilities of Bordetella pertussis - the causative agent of whooping cough - were investigated from a systems-level perspective. We constructed a comprehensive genome-scale metabolic model for B. pertussis, and challenged its predictions experimentally. This systems approach shed light on new potential host-microbe interactions, and allowed us to rationally design novel growth media with over two-fold improvements in pertussis toxin production. Most importantly, we also uncovered the potential for metabolic flexibility of B. pertussis (significantly larger range of substrates than previously alleged; novel active pathways allowing growth in minimal, nearly mineral nutrient combinations where only the carbon source must be organic), although our results also highlight the importance of strain-specific regulatory determinants in shaping metabolic capabilities. Deciphering the underlying regulatory mechanisms appears crucial for a comprehensive understanding of B. pertussis's lifestyle and the epidemiology of whooping cough. The contribution of metabolic models in this context will require the extension of the genome-scale metabolic model to integrate this regulatory dimension. Copyright © 2017 Branco dos Santos et al.

  10. Probing the Genome-Scale Metabolic Landscape of Bordetella pertussis, the Causative Agent of Whooping Cough

    PubMed Central

    Olivier, Brett G.; Boele, Joost; Smessaert, Vincent; De Rop, Philippe; Krumpochova, Petra; Klau, Gunnar W.; Giera, Martin; Dehottay, Philippe; Goffin, Philippe

    2017-01-01

    ABSTRACT Whooping cough is a highly contagious respiratory disease caused by Bordetella pertussis. Despite widespread vaccination, its incidence has been rising alarmingly, and yet, the physiology of B. pertussis remains poorly understood. We combined genome-scale metabolic reconstruction, a novel optimization algorithm, and experimental data to probe the full metabolic potential of this pathogen, using B. pertussis strain Tohama I as a reference. Experimental validation showed that B. pertussis secretes a significant proportion of nitrogen as arginine and purine nucleosides, which may contribute to modulation of the host response. We also found that B. pertussis can be unexpectedly versatile, being able to metabolize many compounds while displaying minimal nutrient requirements. It can grow without cysteine, using inorganic sulfur sources, such as thiosulfate, and it can grow on organic acids, such as citrate or lactate, as sole carbon sources, providing in vivo demonstration that its tricarboxylic acid (TCA) cycle is functional. Although the metabolic reconstruction of eight additional strains indicates that the structural genes underlying this metabolic flexibility are widespread, experimental validation suggests a role of strain-specific regulatory mechanisms in shaping metabolic capabilities. Among five alternative strains tested, three strains were shown to grow on substrate combinations requiring a functional TCA cycle, but only one strain could use thiosulfate. Finally, the metabolic model was used to rationally design growth media with >2-fold improvements in pertussis toxin production. This study thus provides novel insights into B. pertussis physiology and highlights the potential, but also the limitations, of models based solely on metabolic gene content. IMPORTANCE The metabolic capabilities of Bordetella pertussis, the causative agent of whooping cough, were investigated from a systems-level perspective. We constructed a comprehensive genome-scale metabolic model for B. pertussis and challenged its predictions experimentally. This systems approach shed light on new potential host-microbe interactions and allowed us to rationally design novel growth media with >2-fold improvements in pertussis toxin production. Most importantly, we also uncovered the potential for metabolic flexibility of B. pertussis (significantly larger range of substrates than previously alleged; novel active pathways allowing growth in minimal, nearly mineral nutrient combinations where only the carbon source must be organic), although our results also highlight the importance of strain-specific regulatory determinants in shaping metabolic capabilities. Deciphering the underlying regulatory mechanisms appears to be crucial for a comprehensive understanding of B. pertussis's lifestyle and the epidemiology of whooping cough. The contribution of metabolic models in this context will require the extension of the genome-scale metabolic model to integrate this regulatory dimension. PMID:28842544

  11. Validation of the comprehensive feeding practices questionnaire in parents of preschool children in Brazil.

    PubMed

    Warkentin, Sarah; Mais, Laís Amaral; Latorre, Maria do Rosário Dias de Oliveira; Carnell, Susan; Taddei, José Augusto de Aguiar Carrazedo

    2016-07-19

    Recent national surveys in Brazil have demonstrated a decrease in the consumption of traditional food and a parallel increase in the consumption of ultra-processed food, which has contributed to a rise in obesity prevalence in all age groups. Environmental factors, especially familial factors, have a strong influence on the food intake of preschool children, and this has led to the development of psychometric scales to measure parents' feeding practices. The aim of this study was to test the validity of a translated and adapted Comprehensive Feeding Practices Questionnaire in a sample of Brazilian preschool-aged children enrolled in private schools. A transcultural adaptation process was performed in order to develop a modified questionnaire (43 items). After piloting, the questionnaire was sent to parents, along with additional questions about family characteristics. Test-retest reliability was assessed in one of the schools. Factor analysis with oblique rotation was performed. Internal reliability was tested using Cronbach's alpha and between-factor correlations; discriminant validity was assessed using marker variables of the child's food intake; and convergent validity was assessed via correlations with parental perceptions of responsibility for feeding and concern about the child's weight. The final sample consisted of 402 preschool children. Factor analysis resulted in a final questionnaire of 43 items distributed over 6 factors. Cronbach alpha values were adequate (0.74 to 0.88), between-factor correlations were low, and discriminant validity and convergent validity were acceptable. The modified CFPQ demonstrated significant internal reliability in this urban Brazilian sample. Scale validation within different cultures is essential for a more comprehensive understanding of parental feeding practices for preschoolers.
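
    Cronbach's alpha, used here to judge internal reliability, is straightforward to compute from an item-by-respondent score matrix. A minimal sketch with made-up data (the 0.74-0.88 values above come from the actual questionnaire, not from this toy example):

      # Cronbach's alpha for a set of questionnaire items (rows = respondents).
      import numpy as np

      def cronbach_alpha(scores: np.ndarray) -> float:
          """scores: 2-D array of shape (n_respondents, n_items)."""
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(200, 1))                  # one underlying trait
      items = latent + 0.5 * rng.normal(size=(200, 6))    # six noisy items measuring it
      print(f"alpha = {cronbach_alpha(items):.2f}")       # high, since items share the trait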

  12. Modeling mania in preclinical settings: a comprehensive review

    PubMed Central

    Sharma, Ajaykumar N.; Fries, Gabriel R.; Galvez, Juan F.; Valvassori, Samira S.; Soares, Jair C.; Carvalho, André F.; Quevedo, Joao

    2015-01-01

    The current pathophysiological understanding of mechanisms leading to onset and progression of bipolar manic episodes remains limited. At the same time, available animal models for mania have limited face, construct, and predictive validities. Additionally, these models fail to encompass recent pathophysiological frameworks of bipolar disorder (BD), e.g. neuroprogression. Therefore, there is a need to search for novel preclinical models for mania that could comprehensively address these limitations. Herein we review the history, validity, and caveats of currently available animal models for mania. We also review new genetic models for mania, namely knockout mice for genes involved in neurotransmission, synapse formation, and intracellular signaling pathways. Furthermore, we review recent trends in preclinical models for mania that may aid in the comprehension of mechanisms underlying the neuroprogressive and recurring nature of BD. In conclusion, the validity of animal models for mania remains limited. Nevertheless, novel (e.g. genetic) animal models as well as adaptation of existing paradigms hold promise. PMID:26545487

  13. Reliability and validity of the Computerized Comprehension Task (CCT): data from American English and Mexican Spanish infants*

    PubMed Central

    FRIEND, MARGARET; KEPLINGER, MELANIE

    2017-01-01

    Early language comprehension may be one of the most important predictors of developmental risk. The need for performance-based assessment is predicated on limitations identified in the exclusive use of parent report and on the need for a performance measure with which to assess the convergent validity of parent report of comprehension. Child performance data require the development of procedures to facilitate infant attention and compliance. Forty infants (20 at 1;4 and 20 at 1;8) acquiring English completed a standard picture book task and the same task was administered on a touch-sensitive screen. The computerized task significantly improved task attention, compliance and performance. Reliability was high, indicating that infants were not responding randomly. Convergent validity with parent report and 4-month stability was substantial. Preliminary data extending this approach to Mexican-Spanish are presented. Results are discussed in terms of the promise of this technique for clinical and research settings and the potential influences of cultural factors on performance. PMID:18300430

  14. Initial Teacher Licensure Testing in Tennessee: Test Validation.

    ERIC Educational Resources Information Center

    Bowman, Harry L.; Petry, John R.

    In 1988 a study was conducted to determine the validity of candidate teacher licensure examinations for use in Tennessee under the 1984 Comprehensive Education Reform Act. The Department of Education conducted a study to determine the validity of 11 previously unvalidated or extensively revised tests for certification and to make recommendations…

  15. Validation of Student and Parent Reported Data on the Basic Grant Application Form, 1978-79 Comprehensive Validation Guide. Procedural Manual for: Validation of Cases Referred by Institutions; Validation of Cases Referred by the Office of Education; Recovery of Overpayments.

    ERIC Educational Resources Information Center

    Smith, Karen; And Others

    Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…

  16. Virtual laboratories: new opportunities for collaborative water science

    NASA Astrophysics Data System (ADS)

    Ceola, Serena; Arheimer, Berit; Bloeschl, Guenter; Baratti, Emanuele; Capell, Rene; Castellarin, Attilio; Freer, Jim; Han, Dawei; Hrachowitz, Markus; Hundecha, Yeshewatesfa; Hutton, Christopher; Lindström, Goran; Montanari, Alberto; Nijzink, Remko; Parajka, Juraj; Toth, Elena; Viglione, Alberto; Wagener, Thorsten

    2015-04-01

    Reproducibility and repeatability of experiments are the fundamental prerequisites that allow researchers to validate results and share hydrological knowledge, experience and expertise in the light of global water management problems. Virtual laboratories offer new opportunities to enable these prerequisites since they allow experimenters to share data, tools and pre-defined experimental procedures (i.e. protocols). Here we present the outcomes of a first collaborative numerical experiment undertaken by five different international research groups in a virtual laboratory to address the key issues of reproducibility and repeatability. Starting from accurate and detailed experimental protocols, the research groups independently applied a rainfall-runoff model to 15 European catchments and collectively examined the model results through a web-based discussion. We found that a detailed modelling protocol was crucial to ensure the comparability and reproducibility of the proposed experiment across groups. Our results suggest that sharing comprehensive and precise protocols and running the experiments within a controlled environment (e.g. virtual laboratory) is as fundamental as sharing data and tools for ensuring experiment repeatability and reproducibility across the broad scientific community and thus advancing hydrology in a more coherent way.

  17. Exploring Valid Reference Genes for Quantitative Real-time PCR Analysis in Plutella xylostella (Lepidoptera: Plutellidae)

    PubMed Central

    Fu, Wei; Xie, Wen; Zhang, Zhuo; Wang, Shaoli; Wu, Qingjun; Liu, Yong; Zhou, Xiaomao; Zhou, Xuguo; Zhang, Youjun

    2013-01-01

    Abstract: Quantitative real-time PCR (qRT-PCR), a primary tool in gene expression analysis, requires an appropriate normalization strategy to control for variation among samples. The best option is to compare the mRNA level of a target gene with that of reference gene(s) whose expression level is stable across various experimental conditions. In this study, expression profiles of eight candidate reference genes from the diamondback moth, Plutella xylostella, were evaluated under diverse experimental conditions. RefFinder, a web-based analysis tool, integrates four major computational programs including geNorm, Normfinder, BestKeeper, and the comparative ΔCt method to comprehensively rank the tested candidate genes. Elongation factor 1 (EF1) was the most suited reference gene for the biotic factors (development stage, tissue, and strain). In contrast, although appropriate reference gene(s) do exist for several abiotic factors (temperature, photoperiod, insecticide, and mechanical injury), we were not able to identify a single universal reference gene. Nevertheless, a suite of candidate reference genes were specifically recommended for selected experimental conditions. Our finding is the first step toward establishing a standardized qRT-PCR analysis of this agriculturally important insect pest. PMID:23983612
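
    Of the four programs bundled in RefFinder, the comparative ΔCt method is the easiest to reproduce: for every pair of candidate genes the ΔCt is computed across samples, and genes are ranked by the mean standard deviation of their pairwise ΔCt values (lower means more stable). A minimal sketch with invented Ct values, not data from the study:

      # Comparative delta-Ct ranking of candidate qRT-PCR reference genes.
      # For each gene, average the standard deviation of (Ct_gene - Ct_other)
      # over all other candidates; the smallest mean SD marks the most stable gene.
      import numpy as np

      ct = {  # invented Ct values across 6 samples
          "EF1":   [18.1, 18.3, 18.0, 18.2, 18.1, 18.4],
          "ACTB":  [18.5, 19.8, 18.9, 20.1, 19.2, 18.6],
          "GAPDH": [21.0, 21.9, 20.5, 22.3, 21.4, 20.8],
      }

      stability = {}
      for g, vals in ct.items():
          sds = [np.std(np.array(vals) - np.array(other), ddof=1)
                 for h, other in ct.items() if h != g]
          stability[g] = float(np.mean(sds))

      for g in sorted(stability, key=stability.get):
          print(f"{g}: mean pairwise SD = {stability[g]:.2f}")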

  18. The Effects of Advance Organizers and Subtitles on EFL Learners' Listening Comprehension Skills

    ERIC Educational Resources Information Center

    Yang, Hui-Yu

    2014-01-01

    The present research reports the findings of three experiments which explore how subtitles and advance organizers affect EFL learners' listening comprehension of authentic videos. EFL learners are randomly assigned to one of two groups. The control group receives no treatment and the experimental group receives the experimental conditions of one…

  19. Strategies Training in the Teaching of Reading Comprehension for EFL Learners in Indonesia

    ERIC Educational Resources Information Center

    Mistar, Junaidi; Zuhairi, Alfan; Yanti, Nofita

    2016-01-01

    This study investigated the effect of reading strategies training on the students' literal and inferential reading comprehension. The training involved three concrete strategies: predicting, text mapping, and summarizing. To achieve the purpose of this study, a quasi experimental design was selected with the experimental group being given reading…

  20. The Effectiveness of the Barton's Intervention Program on Reading Comprehension and Reading Attitude of Students with Dyslexia.

    PubMed

    Mihandoost, Zeinab; Elias, Habibah

    2011-01-01

    The current research tested for differences in reading attitude and reading comprehension between dyslexic students in a control group and those in an experimental group that followed the Barton intervention program. A dyslexia screening instrument and a reading text were employed to identify dyslexic students. The population of the study included 138 dyslexic students studying in schools in Ilam, Iran. From this population, 64 students were randomly selected and assigned to either an experimental group or a control group. The experimental group was taught for 36 sessions using Barton's method at two levels, and ten lessons were provided to improve reading skills. The reading comprehension and reading attitude instruments were administered before and after the intervention program. The analysis of covariance showed a significant difference between the control group and the experimental group following the Barton intervention program. This study showed that the dyslexic students learned to read, and that more direct instruction in decoding could influence their progress more than general exposure to instruction.

  1. Validation Results for Core-Scale Oil Shale Pyrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staten, Josh; Tiwari, Pankaj

    2015-03-01

    This report summarizes a study of oil shale pyrolysis at various scales and the subsequent development of a model for in situ production of oil from oil shale. Oil shale from the Mahogany zone of the Green River formation was used in all experiments. Pyrolysis experiments were conducted at four scales: powdered samples (100 mesh) and core samples of 0.75”, 1”, and 2.5” diameter. The batch, semibatch and continuous flow pyrolysis experiments were designed to study the effect of temperature (300°C to 500°C), heating rate (1°C/min to 10°C/min), pressure (ambient and 500 psig) and size of the sample on product formation. Comprehensive analyses were performed on reactants and products - liquid, gas and spent shale. These experimental studies were designed to understand the relevant coupled phenomena (reaction kinetics, heat transfer, mass transfer, thermodynamics) at multiple scales. A model for oil shale pyrolysis was developed in the COMSOL multiphysics platform. A general kinetic model was integrated with important physical and chemical phenomena that occur during pyrolysis. The secondary reactions of coking and cracking in the product phase were addressed. The multiscale experimental data generated and the models developed provide an understanding of the simultaneous effects of chemical kinetics and heat and mass transfer on oil quality and yield. The comprehensive data collected in this study will help advance the move to large-scale in situ oil production from the pyrolysis of oil shale.
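
    A common starting point for such pyrolysis models is first-order Arrhenius kinetics for kerogen decomposition under a prescribed heating rate. The sketch below integrates that simple rate law; the rate parameters are illustrative placeholders, not the kinetics fitted in this report:

      # First-order, Arrhenius-controlled kerogen -> oil conversion under a constant heating rate.
      import numpy as np
      from scipy.integrate import solve_ivp

      A  = 1.0e13        # pre-exponential factor [1/s] -- illustrative assumption
      Ea = 220e3         # activation energy [J/mol]   -- illustrative assumption
      R  = 8.314         # gas constant [J/(mol K)]
      beta = 5.0 / 60.0  # heating rate: 5 C/min expressed in K/s
      T0 = 300.0 + 273.15  # start of the ramp [K]

      def dxdt(t, x):
          T = T0 + beta * t                  # linear temperature ramp
          k = A * np.exp(-Ea / (R * T))
          return k * (1.0 - x)               # x = fraction of kerogen converted

      sol = solve_ivp(dxdt, (0.0, 2 * 3600.0), [0.0], max_step=10.0)
      print(f"conversion after 2 h: {sol.y[0, -1]:.2f}")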

  2. Designing and defining dynamic protein cage nanoassemblies in solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Y. -T.; Hura, G. L.; Dyer, K. N.

    Central challenges in the design of large and dynamic macromolecular assemblies for synthetic biology lie in developing effective methods for testing design strategies and their outcomes, including comprehensive assessments of solution behavior. Here, we created and validated an advanced design of a 600-kDa protein homododecamer that self-assembles into a symmetric tetrahedral cage. The monomeric unit is composed of a trimerizing apex-forming domain genetically linked to an edge-forming dimerizing domain. Enhancing the crystallographic results, high-throughput small-angle x-ray scattering (SAXS) comprehensively contrasted our modifications under diverse solution conditions. To generate a phase diagram associating structure and assembly, we developed force plots that measure dissimilarity among multiple SAXS data sets. These new tools, which provided effective feedback on experimental constructs relative to design, have general applicability in analyzing the solution behavior of heterogeneous nanosystems and have been made available as a web-based application. Specifically, our results probed the influence of solution conditions and symmetry on stability and structural adaptability, identifying the dimeric interface as the weak point in the assembly. Force plots comparing SAXS data sets further reveal more complex and controllable behavior in solution than captured by our crystal structures. Lastly, these methods for objectively and comprehensively comparing SAXS profiles for systems critically affected by solvent conditions and structural heterogeneity provide an enabling technology for advancing the design and bioengineering of nanoscale biological materials.

  3. Designing and defining dynamic protein cage nanoassemblies in solution

    DOE PAGES

    Lai, Y. -T.; Hura, G. L.; Dyer, K. N.; ...

    2016-12-14

    Central challenges in the design of large and dynamic macromolecular assemblies for synthetic biology lie in developing effective methods for testing design strategies and their outcomes, including comprehensive assessments of solution behavior. Here, we created and validated an advanced design of a 600-kDa protein homododecamer that self-assembles into a symmetric tetrahedral cage. The monomeric unit is composed of a trimerizing apex-forming domain genetically linked to an edge-forming dimerizing domain. Enhancing the crystallographic results, high-throughput small-angle x-ray scattering (SAXS) comprehensively contrasted our modifications under diverse solution conditions. To generate a phase diagram associating structure and assembly, we developed force plots that measure dissimilarity among multiple SAXS data sets. These new tools, which provided effective feedback on experimental constructs relative to design, have general applicability in analyzing the solution behavior of heterogeneous nanosystems and have been made available as a web-based application. Specifically, our results probed the influence of solution conditions and symmetry on stability and structural adaptability, identifying the dimeric interface as the weak point in the assembly. Force plots comparing SAXS data sets further reveal more complex and controllable behavior in solution than captured by our crystal structures. Lastly, these methods for objectively and comprehensively comparing SAXS profiles for systems critically affected by solvent conditions and structural heterogeneity provide an enabling technology for advancing the design and bioengineering of nanoscale biological materials.
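
    The force-plot idea rests on a pairwise dissimilarity matrix between scattering profiles. The exact metric belongs to the authors' tool; the generic sketch below only illustrates the underlying step, using synthetic I(q) curves and a simple mean-squared-difference distance as stand-ins:

      # Pairwise dissimilarity between SAXS intensity profiles measured on a common q-grid.
      import numpy as np

      rng = np.random.default_rng(2)
      q = np.linspace(0.01, 0.3, 200)

      def profile(radius):
          """Crude sphere-like decay plus noise, standing in for a measured I(q)."""
          return np.exp(-(q * radius) ** 2 / 3.0) + 0.01 * rng.normal(size=q.size)

      profiles = {"pH6": profile(28.0), "pH7": profile(30.0), "pH8": profile(40.0)}

      def dissimilarity(a, b):
          a, b = a / a.max(), b / b.max()    # naive scaling onto a common footing
          return float(np.mean((a - b) ** 2))

      names = list(profiles)
      for i, m in enumerate(names):
          for n in names[i + 1:]:
              print(f"{m} vs {n}: {dissimilarity(profiles[m], profiles[n]):.4f}")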

  4. Analysis of a rotating spool expander for Organic Rankine Cycle applications

    NASA Astrophysics Data System (ADS)

    Krishna, Abhinav

    Increasing interest in recovering or utilizing low-grade heat for power generation has prompted a search for ways in which the power conversion process may be enhanced. Among conversion systems, the Organic Rankine Cycle (ORC) has generated enormous interest among researchers and system designers. Nevertheless, component-level technologies need to be developed and matched to the range of potential applications. In particular, technical challenges associated with scaling expansion machines (turbines) from utility scale to commercial scale have prevented widespread adoption of the technology. In this regard, this work focuses on a novel rotating spool expansion machine at the heart of an Organic Rankine Cycle. A comprehensive, deterministic simulation model of the rotating spool expander is developed. The model includes a detailed geometry model of the spool expander and the suction valve mechanism. Sub-models for mass flow, leakage, heat transfer and friction within the expander are also developed. Apart from providing the ability to characterize the expander in a particular system, the model provides a valuable tool to study the impact of various design variables on the performance of the machine. The investigative approach also involved an experimental program to assess the performance of a working prototype. In general, the experimental data showed that the expander performance was sub-par, largely due to the mismatch of prevailing operating conditions and the expander design criteria. Operating challenges during the shakedown tests and subsequent sub-optimal design changes also detracted from performance. Nevertheless, the results of the experimental program were sufficient for a proof-of-concept assessment of the expander and for model validation over a wide range of operating conditions. The results of the validated model reveal several interesting details concerning the expander design and performance. For example, the match between the design expansion ratio and the system-imposed pressure ratio has a large influence on the performance of the expander. Further exploration shows that from an operating perspective, under-expansion is preferable to over-expansion. The model is also able to provide insight into the dominant leakage paths in the expander and shows that leakage is the primary loss mechanism in the current design. Similar insights are obtained from assessing the sensitivity of various other design variables on expander performance. Based on the understanding provided by the sensitivity analysis, exercising the validated model showed that expander efficiencies on the order of 75% are eminently possible in an improved design. Therefore, with sufficient future development, adoption of the spool expander in ORC systems spanning the 50 kW to 200 kW range is broadly feasible.

  5. MOSAIC : Model Of Sustainability And Integrated Corridors, phase 3 : comprehensive model calibration and validation and additional model enhancement.

    DOT National Transportation Integrated Search

    2015-02-01

    The Maryland State Highway Administration (SHA) has initiated major planning efforts to improve transportation : efficiency, safety, and sustainability on critical highway corridors through its Comprehensive Highway Corridor : (CHC) program. This pro...

  6. Competing Activation during Fantasy Text Comprehension

    ERIC Educational Resources Information Center

    Creer, Sarah D.; Cook, Anne E.; O'Brien, Edward J.

    2018-01-01

    During comprehension, readers' general world knowledge and contextual information compete for influence during integration and validation. Fantasy narratives, in which general world knowledge often conflicts with fantastical events, provide a setting to examine this competition. Experiment 1 showed that with sufficient elaboration, contextual…

  7. Reading Comprehension Improvement with Individualized Cognitive Profiles and Metacognition

    ERIC Educational Resources Information Center

    Allen, Kathleen D.; Hancock, Thomas E.

    2008-01-01

    This study models improving classroom reading instruction through valid assessment and individualized metacomprehension. Individualized cognitive profiles of Woodcock-Johnson III cognitive abilities correlated with reading comprehension were used during classroom independent reading for judgments of learning, feedback, self-reflection, and…

  8. [The development and effects of a comprehensive communication course for nursing students].

    PubMed

    Kim, Sunah; Park, Jung-Hwa; Lee, Hyun-Hwa

    2004-06-01

    The purposes of this study were to: (a) develop a comprehensive communication course combined with a group program for improving communication skills; and (b) examine the effects of the comprehensive communication course on interpersonal communication, relationship change, self-esteem, and depression in nursing students. The experimental group consisted of 82 nursing students, and the control group, 108 nursing students. Both groups took communication courses from March to June in 2002 and 2003. A group program for improving communication skills was conducted for each of the 8 subgroups of the experimental group for 90 minutes once a week over 6 weeks, while the existing communication lecture was given to the control group. Both groups were post-tested after the intervention to verify differences in the variables between the two groups, and the experimental group was also pre-tested to verify differences before and after the treatment. The post-test interpersonal communication score in the experimental group was significantly higher than in the control group, and the post-test depression score in the experimental group was significantly lower than in the control group. Interpersonal communication, relationship change and self-esteem scores significantly increased, and the depression score significantly decreased, in the experimental group after the treatment. In conclusion, the comprehensive communication course developed in this study had positive effects on communication skills in nursing students.

  9. Library of molecular associations: curating the complex molecular basis of liver diseases.

    PubMed

    Buchkremer, Stefan; Hendel, Jasmin; Krupp, Markus; Weinmann, Arndt; Schlamp, Kai; Maass, Thorsten; Staib, Frank; Galle, Peter R; Teufel, Andreas

    2010-03-20

    Systems biology approaches offer novel insights into the development of chronic liver diseases. Current genomic databases supporting systems biology analyses are mostly based on microarray data. Although these data often cover genome-wide expression, the validity of single microarray experiments remains questionable. However, for systems biology approaches addressing the interactions of molecular networks, comprehensive but also highly validated data are necessary. We have therefore generated the first comprehensive database for published molecular associations in human liver diseases. It is based on abstracts published in PubMed and is aimed at closing the gap between the genome-wide but low-validity coverage of microarray data and individual, highly validated data from PubMed. After an initial text mining process, the extracted abstracts were all manually validated to confirm content and potential genetic associations and may therefore be highly trusted. All data were stored in a publicly available database, Library of Molecular Associations http://www.medicalgenomics.org/databases/loma/news, currently holding approximately 1260 confirmed molecular associations for chronic liver diseases such as HCC, CCC, liver fibrosis, NASH/fatty liver disease, AIH, PBC, and PSC. We furthermore transformed these data into a powerful resource for molecular liver research by connecting them to multiple biomedical information resources. Together, this is the first available database providing a comprehensive view of, and analysis options for, published molecular associations across multiple liver diseases.

  10. Validity and reliability of four language mapping paradigms.

    PubMed

    Wilson, Stephen M; Bautista, Alexa; Yen, Melodie; Lauderdale, Stefanie; Eriksson, Dana K

    2017-01-01

    Language areas of the brain can be mapped in individual participants with functional MRI. We investigated the validity and reliability of four language mapping paradigms that may be appropriate for individuals with acquired aphasia: sentence completion, picture naming, naturalistic comprehension, and narrative comprehension. Five neurologically normal older adults were scanned on each of the four paradigms on four separate occasions. Validity was assessed in terms of whether activation patterns reflected the known typical organization of language regions, that is, lateralization to the left hemisphere, and involvement of the left inferior frontal gyrus and the left middle and/or superior temporal gyri. Reliability (test-retest reproducibility) was quantified in terms of the Dice coefficient of similarity, which measures overlap of activations across time points. We explored the impact of different absolute and relative voxelwise thresholds, a range of cluster size cutoffs, and limitation of analyses to a priori potential language regions. We found that the narrative comprehension and sentence completion paradigms offered the best balance of validity and reliability. However, even with optimal combinations of analysis parameters, there were many scans on which known features of typical language organization were not demonstrated, and test-retest reproducibility was only moderate for realistic parameter choices. These limitations in terms of validity and reliability may constitute significant limitations for many clinical or research applications that depend on identifying language regions in individual participants.
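
    The test-retest reproducibility above is quantified with the Dice coefficient of overlap between thresholded activation maps. A minimal sketch of that computation follows, assuming binary voxel masks from two scanning sessions; the array sizes and threshold are illustrative, not taken from the study.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary activation masks: 2*|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both maps empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two hypothetical thresholded activation maps from repeat scans.
scan1 = np.random.rand(64, 64, 40) > 0.95
scan2 = np.random.rand(64, 64, 40) > 0.95
print(f"Dice = {dice_coefficient(scan1, scan2):.3f}")
```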

  11. Genome-Wide Analysis of A-to-I RNA Editing.

    PubMed

    Savva, Yiannis A; Laurent, Georges St; Reenan, Robert A

    2016-01-01

    Adenosine (A)-to-inosine (I) RNA editing is a fundamental posttranscriptional modification that ensures the deamination of A-to-I in double-stranded (ds) RNA molecules. Intriguingly, the A-to-I RNA editing system is particularly active in the nervous system of higher eukaryotes, altering a plethora of noncoding and coding sequences. Abnormal RNA editing is highly associated with many neurological phenotypes and neurodevelopmental disorders. However, the molecular mechanisms underlying RNA editing-mediated pathogenesis still remain enigmatic and have attracted increasing attention from researchers. Over the last decade, methods available to perform genome-wide transcriptome analysis have evolved rapidly. Within the RNA editing field, researchers have adopted next-generation sequencing technologies to identify RNA-editing sites within genomes and to elucidate the underlying process. However, technical challenges associated with editing site discovery have hindered efforts to uncover comprehensive editing site datasets, resulting in the general perception that the collections of annotated editing sites represent only a small minority of the total number of sites in a given organism, tissue, or cell type of interest. In addition to doubts about sensitivity, existing RNA-editing site lists often contain high percentages of false positives, leading to uncertainty about their validity and usefulness in downstream studies. An accurate investigation of A-to-I editing requires properly validated datasets of editing sites with demonstrated and transparent levels of sensitivity and specificity. Here, we describe a high signal-to-noise method for RNA-editing site detection using single-molecule sequencing (SMS). With this method, authentic RNA-editing sites may be differentiated from artifacts. Machine learning approaches provide a procedure to improve upon and experimentally validate sequencing outcomes through use of computationally predicted, iterative feedback loops. Subsequent use of extensive Sanger sequencing validations can generate accurate editing site lists. This approach has broad application, and accurate genome-wide editing analysis of various tissues from clinical specimens or experimental organisms is now possible.

  12. Quantitative detection of caffeine in human skin by confocal Raman spectroscopy--A systematic in vitro validation study.

    PubMed

    Franzen, Lutz; Anderski, Juliane; Windbergs, Maike

    2015-09-01

    For rational development and evaluation of dermal drug delivery, the knowledge of rate and extent of substance penetration into the human skin is essential. However, current analytical procedures are destructive, labor-intensive and lack a defined spatial resolution. In this context, confocal Raman microscopy bears the potential to overcome current limitations in drug depth profiling. Confocal Raman microscopy already proved its suitability for the acquisition of qualitative penetration profiles, but a comprehensive investigation regarding its suitability for quantitative measurements inside the human skin is still missing. In this work, we present a systematic validation study to deploy confocal Raman microscopy for quantitative drug depth profiling in human skin. After we validated our Raman microscopic setup, we successfully established an experimental procedure that allows correlating the Raman signal of a model drug with its controlled concentration in human skin. To overcome current drawbacks in drug depth profiling, we evaluated different modes of peak correlation for quantitative Raman measurements and offer a suitable operating procedure for quantitative drug depth profiling in human skin. In conclusion, we successfully demonstrate the potential of confocal Raman microscopy for quantitative drug depth profiling in human skin as a valuable alternative to destructive state-of-the-art techniques. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Associating Animations with Concrete Models to Enhance Students' Comprehension of Different Visual Representations in Organic Chemistry

    ERIC Educational Resources Information Center

    Al-Balushi, Sulaiman M.; Al-Hajri, Sheikha H.

    2014-01-01

    The purpose of the current study is to explore the impact of associating animations with concrete models on eleventh-grade students' comprehension of different visual representations in organic chemistry. The study used a post-test control group quasi-experimental design. The experimental group (N = 28) used concrete models, submicroscopic…

  14. Morphological Instructional Packages as Determinants of Inferring Word Meanings in Reading Comprehension among Secondary School Students

    ERIC Educational Resources Information Center

    Akinwumi, Julius Olaitan; Olubunmi, Olagundoye Christanah

    2017-01-01

    This study investigated the effects of morphological instructional packages as determinants of inferring word meanings in reading comprehension among secondary school students in Ekiti State. The study adopted pre-test, post-test and control quasi-experimental research using two experimental groups and one control group with a sample of 270 Senior…

  15. Using Different Types of Dictionaries for Improving EFL Reading Comprehension and Vocabulary Learning

    ERIC Educational Resources Information Center

    Alharbi, Majed A.

    2016-01-01

    This study investigated the effects of monolingual book dictionaries, popup dictionaries, and type-in dictionaries on improving reading comprehension and vocabulary learning in an EFL program. An experimental design involving four groups and a post-test was chosen for the experiment: (1) pop-up dictionary (experimental group 1); (2) type-in…

  16. Exploring a Framework for Consequential Validity for Performance-Based Assessments

    ERIC Educational Resources Information Center

    Kim, Su Jung

    2017-01-01

    This study explores a new comprehensive framework for understanding elements of validity, specifically for performance assessments that are administered within specific and dynamic contexts. The adoption of edTPA is a good empirical case for examining the concept of consequential validity because this assessment has been implemented at the state…

  17. Examining the Validity of the Technological Pedagogical Content Knowledge (TPACK) Framework for Preservice Chemistry Teachers

    ERIC Educational Resources Information Center

    Deng, Feng; Chai, Ching Sing; So, Hyo-Jeong; Qian, Yangyi; Chen, Lingling

    2017-01-01

    While various quantitative measures for assessing teachers' technological pedagogical content knowledge (TPACK) have developed rapidly, few studies to date have comprehensively validated the structure of TPACK through various criteria of validity especially for content specific areas. In this paper, we examined how the TPACK survey measure is…

  18. A Validity and Reliability Update on the Informal Reading Inventory with Suggestions for Improvement.

    ERIC Educational Resources Information Center

    Klesius, Janell P.; Homan, Susan P.

    1985-01-01

    The article reviews validity and reliability studies on the informal reading inventory, a diagnostic instrument to identify reading grade-level placement and strengths and weaknesses in word recognition and comprehension. Gives suggestions to improve the validity and reliability of existing inventories and to evaluate them in newly published…

  19. Variety and Drift in the Functions and Purposes of Assessment in K-12 Education

    ERIC Educational Resources Information Center

    Ho, Andrew D.

    2014-01-01

    Background/Context: The target of assessment validation is not an assessment but the use of an assessment for a purpose. Although the validation literature often provides examples of assessment purposes, comprehensive reviews of these purposes are rare. Additionally, assessment purposes posed for validation are generally described as discrete and…

  20. The Validity of ITBS Reading Comprehension Test Scores for Learning Disabled and Non Learning Disabled Students under Extended-Time Conditions.

    ERIC Educational Resources Information Center

    Huesman, Ronald L., Jr.; Frisbie, David A.

    This study investigated the effect of extended-time limits in terms of performance levels and score comparability for reading comprehension scores on the Iowa Tests of Basic Skills (ITBS). The first part of the study compared the average reading comprehension scores on the ITBS of 61 sixth-graders with learning disabilities and 397 non learning…

  1. English Word-Level Decoding and Oral Language Factors as Predictors of Third and Fifth Grade English Language Learners' Reading Comprehension Performance

    ERIC Educational Resources Information Center

    Landon, Laura L.

    2017-01-01

    This study examines the application of the Simple View of Reading (SVR), a reading comprehension theory focusing on word recognition and linguistic comprehension, to English Language Learners' (ELLs') English reading development. This study examines the concurrent and predictive validity of two components of the SVR, oral language and word-level…

  2. Overview of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel; Schuster, David M.; Dalenbring, Mats

    2013-01-01

    The AIAA Aeroelastic Prediction Workshop (AePW) was held in April, 2012, bringing together communities of aeroelasticians and computational fluid dynamicists. The objective in conducting this workshop on aeroelastic prediction was to assess state-of-the-art computational aeroelasticity methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. No comprehensive aeroelastic benchmarking validation standard currently exists, greatly hindering validation and state-of-the-art assessment objectives. The workshop was a step towards assessing the state of the art in computational aeroelasticity. This was an opportunity to discuss and evaluate the effectiveness of existing computer codes and modeling techniques for unsteady flow, and to identify computational and experimental areas needing additional research and development. Three configurations served as the basis for the workshop, providing different levels of geometric and flow field complexity. All cases considered involved supercritical airfoils at transonic conditions. The flow fields contained oscillating shocks and in some cases, regions of separation. The computational tools principally employed Reynolds-Averaged Navier Stokes solutions. The successes and failures of the computations and the experiments are examined in this paper.

  3. Modeling the Formation of Transverse Weld during Billet-on-Billet Extrusion

    PubMed Central

    Mahmoodkhani, Yahya; Wells, Mary; Parson, Nick; Jowett, Chris; Poole, Warren

    2014-01-01

    A comprehensive mathematical model of the hot extrusion process for aluminum alloys has been developed and validated. The plasticity module was developed using a commercial finite element package, DEFORM-2D, a transient Lagrangian model which couples the thermal and deformation phenomena. Validation of the model against industrial data indicated that it gave excellent predictions of the pressure during extrusion. The finite element predictions of the velocity fields were post-processed to calculate the thickness of the surface cladding as one billet is fed in after another through the die (i.e., the transverse weld). The mathematical model was then used to assess the effect a change in feeder dimensions would have on the shape, thickness and extent of the transverse weld during extrusion. Experimental measurements for different combinations of billet materials show that the model is able to accurately predict the transverse weld shape as well as the clad surface layer to thicknesses of 50 μm. The transverse weld is significantly affected by the feeder geometry shape, but the effects of ram speed, billet material and temperature on the transverse weld dimensions are negligible. PMID:28788629

  4. Simulation of the AC corona phenomenon with experimental validation

    NASA Astrophysics Data System (ADS)

    Villa, Andrea; Barbieri, Luca; Marco, Gondola; Malgesini, Roberto; Leon-Garzon, Andres R.

    2017-11-01

    The corona effect, and in particular the Trichel phenomenon, is an important aspect of plasma physics with many technical applications, such as pollution reduction, surface and medical treatments. This phenomenon is also associated with components used in the power industry, where it is, in many cases, the source of electromagnetic disturbance, noise and the production of undesired chemically active species. Although the power industry to date uses mainly alternating current (AC) transmission, most studies of the corona effect have been carried out with direct current (DC) sources. Therefore, there is technical interest in validating numerical codes capable of simulating the AC phenomenon. In this work we describe a set of partial differential equations that are comprehensive enough to reproduce the distinctive features of the corona in an AC regime. The model embeds some selectable chemical databases, comprising tens of chemical species and hundreds of reactions, the thermal dynamics of neutral species and photoionization. A large set of parameters—deduced from experiments and numerical estimations—is compared to assess the effectiveness of the proposed approach.

  5. Integrated modeling analysis of a novel hexapod and its application in active surface

    NASA Astrophysics Data System (ADS)

    Yang, Dehua; Zago, Lorenzo; Li, Hui; Lambert, Gregory; Zhou, Guohua; Li, Guoping

    2011-09-01

    This paper presents the concept and integrated modeling analysis of a novel mechanism, a 3-CPS/RPPS hexapod, for supporting segmented reflectors for radio telescopes and eventually segmented mirrors of optical telescopes. The concept comprises a novel type of hexapod with an original organization of actuators, and hence of degrees of freedom, based on a swaying-arm design concept. With specially designed connecting joints between panels/segments, an iso-static master-slave active surface concept can then be achieved for any triangular and/or hexagonal panel/segment pattern. The integrated modeling comprises all the multifold sizing and performance aspects which must be evaluated concurrently in order to optimize and validate the design and the configuration. In particular, comprehensive investigations of kinematic behavior, dynamics, wave-front error and sensitivity are carried out using frequently employed tools such as MATLAB/SimMechanics, CALFEM and ANSYS. Notably, we introduce the finite element method as a competent approach for analyzing the multi-degree-of-freedom mechanism. Some experimental verifications, already performed to validate individual aspects of the integrated concept, are also presented together with the results obtained.

  6. Design-based modeling of magnetically actuated soft diaphragm materials

    NASA Astrophysics Data System (ADS)

    Jayaneththi, V. R.; Aw, K. C.; McDaid, A. J.

    2018-04-01

    Magnetic polymer composites (MPC) have shown promise for emerging biomedical applications such as lab-on-a-chip and implantable drug delivery. These soft material actuators are capable of fast response, large deformation and wireless actuation. Existing MPC modeling approaches are computationally expensive and unsuitable for rapid design prototyping and real-time control applications. This paper proposes a macro-scale 1-DOF model capable of predicting force and displacement of an MPC diaphragm actuator. Model validation confirmed both blocked force and displacement can be accurately predicted in a variety of working conditions i.e. different magnetic field strengths, static/dynamic fields, and gap distances. The contribution of this work includes a comprehensive experimental investigation of a macro-scale diaphragm actuator; the derivation and validation of a new phenomenological model to describe MPC actuation; and insights into the proposed model’s design-based functionality i.e. scalability and generalizability in terms of magnetic filler concentration and diaphragm diameter. Due to the lumped element modeling approach, the proposed model can also be adapted to alternative actuator configurations, and thus presents a useful tool for design, control and simulation of novel MPC applications.
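
    The abstract above describes a macro-scale 1-DOF phenomenological model that predicts the force and displacement of the MPC diaphragm. The paper's actual formulation is not reproduced in this record, so the sketch below only illustrates the general lumped-element idea with an assumed mass-spring-damper driven by an external magnetic force; all parameter values and the forcing function are hypothetical.

```python
import numpy as np

def simulate_diaphragm(m, c, k, magnetic_force, t, x0=0.0, v0=0.0):
    """Integrate m*x'' + c*x' + k*x = F_mag(t) with semi-implicit Euler steps.

    m, c, k        : lumped mass, damping, stiffness (assumed known).
    magnetic_force : callable F_mag(t) in newtons.
    t              : 1-D array of time points (s).
    Returns the displacement history x(t).
    """
    x, v = x0, v0
    xs = np.empty_like(t)
    for i, ti in enumerate(t):
        xs[i] = x
        if i + 1 < len(t):
            dt = t[i + 1] - ti
            a = (magnetic_force(ti) - c * v - k * x) / m
            v += a * dt
            x += v * dt
    return xs

# Hypothetical parameters: 0.1 g diaphragm, soft spring, sinusoidal field force.
t = np.linspace(0.0, 0.5, 5000)
x = simulate_diaphragm(m=1e-4, c=0.02, k=50.0,
                       magnetic_force=lambda ti: 5e-3 * np.sin(2 * np.pi * 10 * ti),
                       t=t)
print(f"peak displacement ≈ {x.max() * 1e3:.2f} mm")
```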

  7. Validation of hot-poured crack sealant performance-based guidelines.

    DOT National Transportation Integrated Search

    2017-06-01

    This report summarizes a comprehensive research effort to validate thresholds for performance-based guidelines and : grading system for hot-poured asphalt crack sealants. A series of performance tests were established in earlier research and : includ...

  8. Validation of the Comprehensive ICF Core Set for Vocational Rehabilitation From the Perspective of Physical Therapists: International Delphi Survey.

    PubMed

    Kaech Moll, Veronika M; Escorpizo, Reuben; Portmann Bergamaschi, Ruth; Finger, Monika E

    2016-08-01

    The Comprehensive ICF Core Set for vocational rehabilitation (VR) is a list of essential categories on functioning based on the World Health Organization (WHO) International Classification of Functioning, Disability and Health (ICF), which describes a standard for interdisciplinary assessment, documentation, and communication in VR. The aim of this study was to examine the content validity of the Comprehensive ICF Core Set for VR from the perspective of physical therapists. A 3-round email survey was performed using the Delphi method. A convenience sample of international physical therapists working in VR with work experience of ≥2 years was asked to identify aspects they consider relevant when evaluating or treating clients in VR. Responses were linked to the ICF categories and compared with the Comprehensive ICF Core Set for VR. Sixty-two physical therapists from all 6 WHO world regions responded with 3,917 statements that were subsequently linked to 338 ICF categories. Fifteen (17%) of the 90 categories in the Comprehensive ICF Core Set for VR were confirmed by the physical therapists in the sample. Twenty-two additional ICF categories were identified that were not included in the Comprehensive ICF Core Set for VR. Vocational rehabilitation in physical therapy is not well defined in every country, which might have contributed to the small sample size; therefore, the results cannot be generalized to all physical therapists practicing in VR. The content validity of the ICF Core Set for VR is insufficient from solely a physical therapist perspective. The results of this study could be used to define a physical therapy-specific set of ICF categories to develop and guide physical therapist clinical practice in VR. © 2016 American Physical Therapy Association.

  9. An information theory account of cognitive control.

    PubMed

    Fan, Jin

    2014-01-01

    Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.
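
    Because the account above treats cognitive control as the handling of quantified uncertainty, a worked example of the underlying information measures may help. The sketch assumes a discrete set of stimulus-response alternatives and uses standard Shannon entropy and surprise; the probabilities are illustrative only.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def surprise_bits(p_event):
    """Surprise (self-information) of a single event, -log2 p, in bits."""
    return -math.log2(p_event)

# Uncertainty faced by the controller for 4 equally likely alternatives
# versus a highly predictable context (illustrative numbers only).
print(entropy_bits([0.25] * 4))                 # 2.0 bits
print(entropy_bits([0.85, 0.05, 0.05, 0.05]))   # ~0.85 bits
print(surprise_bits(0.05))                      # ~4.3 bits for a rare event
```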

  10. A mechanistic modelling approach to polymer dissolution using magnetic resonance microimaging.

    PubMed

    Kaunisto, Erik; Abrahmsen-Alami, Susanna; Borgquist, Per; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders

    2010-10-15

    In this paper a computationally efficient mathematical model describing the swelling and dissolution of a polyethylene oxide tablet is presented. The model was calibrated against polymer release, front position and water concentration profile data inside the gel layer, using two different diffusion models. The water concentration profiles were obtained from magnetic resonance microimaging data which, in addition to the previously used texture analysis method, can help to validate and discriminate between the mechanisms of swelling, diffusion and erosion in relation to the dissolution process. Critical parameters were identified through a comprehensive sensitivity analysis, and the effect of hydrodynamic shearing was investigated by using two different stirring rates. Good agreement was obtained between the experimental results and the model. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Correlation between tunability and anisotropy in magnetoelectric voltage tunable inductor (VTI).

    PubMed

    Yan, Yongke; Geng, Liwei D; Zhang, Lujie; Gao, Xiangyu; Gollapudi, Sreenivasulu; Song, Hyun-Cheol; Dong, Shuxiang; Sanghadasa, Mohan; Ngo, Khai; Wang, Yu U; Priya, Shashank

    2017-11-22

    Electric field modulation of magnetic properties via magnetoelectric coupling in composite materials is of fundamental and technological importance for realizing tunable energy efficient electronics. Here we provide a foundational analysis of a magnetoelectric voltage tunable inductor (VTI) that exhibits extremely large inductance tunability of up to 1150% under moderate electric fields. This field dependence of inductance arises from the change of permeability, which correlates with the stress dependence of magnetic anisotropy. Through a combination of analytical models validated by experimental results, a comprehensive understanding of the effect of various anisotropies on the tunability of the VTI is provided. Results indicate that inclusion of magnetic materials with low magnetocrystalline anisotropy is one of the most effective ways to achieve high VTI tunability. This study opens a pathway towards the design of tunable circuit components that exhibit field-dependent electronic behavior.

  12. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
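
    The invention summary above applies a Particle Filtering framework to a battery model to predict remaining useful life. The patented battery model itself is not given in this record, so the following sketch substitutes a toy exponential capacity-fade model simply to illustrate the predict/update/resample loop and how a remaining-useful-life distribution falls out of the particle cloud; every number in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy degradation model (an assumption of this sketch, not the patented model):
# capacity C_k = C_{k-1} * exp(-r) + process noise, with the fade rate r unknown.
N = 2000                                    # number of particles
EOL, MEAS_SIGMA = 0.7, 0.01                 # end-of-life threshold, sensor noise

def pf_step(cap, rate, measurement):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    cap = cap * np.exp(-rate) + rng.normal(0.0, 0.002, cap.size)
    w = np.exp(-0.5 * ((measurement - cap) / MEAS_SIGMA) ** 2)
    w /= w.sum()
    idx = rng.choice(cap.size, size=cap.size, p=w)   # multinomial resampling
    return cap[idx], rate[idx]

def rul_percentiles(cap, rate):
    """Cycles until each particle crosses EOL, summarized as 5/50/95 percentiles."""
    rul = np.log(cap / EOL) / rate
    return np.percentile(rul, [5, 50, 95])

cap = np.full(N, 1.0)                       # normalized initial capacity
rate = rng.uniform(0.001, 0.02, N)          # prior over the fade rate per cycle

# Feed synthetic capacity measurements generated with a "true" fade rate of 0.01.
for k in range(1, 21):
    cap, rate = pf_step(cap, rate, np.exp(-0.01 * k) + rng.normal(0.0, MEAS_SIGMA))
print("RUL percentiles (cycles):", rul_percentiles(cap, rate).round(1))
```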

  13. A Comprehensive Review on the Predictive Performance of the Sheiner-Tozer and Derivative Equations for the Correction of Phenytoin Concentrations.

    PubMed

    Kiang, Tony K L; Ensom, Mary H H

    2016-04-01

    In settings where free phenytoin concentrations are not available, the Sheiner-Tozer equation-Corrected total phenytoin concentration = Observed total phenytoin concentration/[(0.2 × Albumin) + 0.1]; phenytoin in µg/mL, albumin in g/dL-and its derivative equations are commonly used to correct for altered phenytoin binding to albumin. The objective of this article was to provide a comprehensive and updated review on the predictive performance of these equations in various patient populations. A literature search of PubMed, EMBASE, and Google Scholar was conducted using combinations of the following terms: Sheiner-Tozer, Winter-Tozer, phenytoin, predictive equation, precision, bias, free fraction. All English-language articles up to November 2015 (excluding abstracts) were evaluated. This review shows the Sheiner-Tozer equation to be biased and imprecise in various critical care, head trauma, and general neurology patient populations. Factors contributing to bias and imprecision include the following: albumin concentration, free phenytoin assay temperature, experimental conditions (eg, timing of concentration sampling, steady-state dosing conditions), renal function, age, concomitant medications, and patient type. Although derivative equations using varying albumin coefficients have improved accuracy (without much improvement in precision) in intensive care and elderly patients, these equations still require further validation. Further experiments are also needed to yield derivative equations with good predictive performance in all populations as well as to validate the equations' impact on actual patient efficacy and toxicity outcomes. More complex, multivariate predictive equations may be required to capture all variables that can potentially affect phenytoin pharmacokinetics and clinical therapeutic outcomes. © The Author(s) 2016.
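
    The correction equation quoted in the abstract is straightforward to apply; a minimal sketch follows, keeping the stated units (total phenytoin in µg/mL, albumin in g/dL) and exposing the albumin coefficient as a parameter because the derivative equations reviewed in the article vary it. The example patient values are illustrative only.

```python
def corrected_phenytoin(total_conc_ug_ml, albumin_g_dl, albumin_coeff=0.2):
    """Sheiner-Tozer correction of a measured total phenytoin concentration.

    corrected = observed / (albumin_coeff * albumin + 0.1)
    Units follow the abstract: phenytoin in µg/mL, albumin in g/dL.
    Derivative equations reviewed in the article adjust albumin_coeff;
    any value other than 0.2 is an assumption of this sketch.
    """
    return total_conc_ug_ml / (albumin_coeff * albumin_g_dl + 0.1)

# A hypoalbuminemic patient: a measured total of 7 µg/mL with albumin 2.2 g/dL
# corresponds to a corrected (normalized) concentration of about 13 µg/mL.
print(round(corrected_phenytoin(7.0, 2.2), 1))
```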

  14. The light spot test: Measuring anxiety in mice in an automated home-cage environment.

    PubMed

    Aarts, Emmeke; Maroteaux, Gregoire; Loos, Maarten; Koopmans, Bastijn; Kovačević, Jovana; Smit, August B; Verhage, Matthijs; Sluis, Sophie van der

    2015-11-01

    Behavioral tests of animals in a controlled experimental setting provide a valuable tool to advance understanding of genotype-phenotype relations, and to study the effects of genetic and environmental manipulations. To optimally benefit from the increasing numbers of genetically engineered mice, reliable high-throughput methods for comprehensive behavioral phenotyping of mice lines have become a necessity. Here, we describe the development and validation of an anxiety test, the light spot test, that allows for unsupervised, automated, high-throughput testing of mice in a home-cage system. This automated behavioral test circumvents bias introduced by pretest handling, and enables recording both baseline behavior and the behavioral test response over a prolonged period of time. We demonstrate that the light spot test induces a behavioral response in C57BL/6J mice. This behavior reverts to baseline when the aversive stimulus is switched off, and is blunted by treatment with the anxiolytic drug Diazepam, demonstrating predictive validity of the assay, and indicating that the observed behavioral response has a significant anxiety component. Also, we investigated the effectiveness of the light spot test as part of sequential testing for different behavioral aspects in the home-cage. Two learning tests, administered prior to the light spot test, affected the light spot test parameters. The light spot test is a novel, automated assay for anxiety-related high-throughput testing of mice in an automated home-cage environment, allowing for both comprehensive behavioral phenotyping of mice, and rapid screening of pharmacological compounds. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Evolving Relevance of Neuroproteomics in Alzheimer's Disease.

    PubMed

    Lista, Simone; Zetterberg, Henrik; O'Bryant, Sid E; Blennow, Kaj; Hampel, Harald

    2017-01-01

    Substantial progress in the understanding of the biology of Alzheimer's disease (AD) has been achieved over the past decades. The early detection and diagnosis of AD and other age-related neurodegenerative diseases, however, remain a challenging scientific frontier. Therefore, the comprehensive discovery (relating to all individual, converging or diverging biochemical disease mechanisms), development, validation, and qualification of standardized biological markers with diagnostic and prognostic functions and a precise performance profile regarding specificity, sensitivity, and positive and negative predictive value are warranted. Methodological innovations in the area of exploratory high-throughput technologies, such as sequencing, microarrays, and mass spectrometry-based analyses of proteins/peptides, have led to the generation of large global molecular datasets from a multiplicity of biological systems, such as biological fluids, cells, tissues, and organs. Such methodological progress has shifted attention to the execution of hypothesis-independent comprehensive exploratory analyses (as opposed to the classical hypothesis-driven candidate approach), with the aim of fully understanding biological systems in physiology and disease as a whole. The systems biology paradigm integrates experimental biology with accurate and rigorous computational modelling to describe and foresee the dynamic features of biological systems. The use of dynamically evolving technological platforms, including mass spectrometry, in the area of proteomics has made it possible to accelerate the process of biomarker discovery and validation, significantly refining the diagnosis of AD. Currently, proteomics (which is part of the systems biology paradigm) is regarded as one of the dominant mature sciences needed for the effective exploratory discovery of prospective biomarker candidates expected to play an effective role in aiding the early detection, diagnosis, prognosis, and therapy development in AD.

  16. Development of the quality assessment model of EHR software in family medicine practices: research based on user satisfaction.

    PubMed

    Kralj, Damir; Kern, Josipa; Tonkovic, Stanko; Koncar, Miroslav

    2015-09-09

    Family medicine practices (FMPs) make the basis for the Croatian health care system. Use of electronic health record (EHR) software is mandatory and it plays an important role in running these practices, but important functional features still remain uneven and largely left to the will of the software developers. The objective of this study was to develop a novel and comprehensive model for functional evaluation of the EHR software in FMPs, based on current world standards, models and projects, as well as on actual user satisfaction and requirements. Based on previous theoretical and experimental research in this area, we constructed an initial framework model consisting of six basic categories as the basis for an online survey questionnaire. Family doctors assessed perceived software quality by using a five-point Likert-type scale. Using exploratory factor analysis and appropriate statistical methods over the collected data, the final optimal structure of the novel model was formed. Special attention was focused on the validity and quality of the novel model. The online survey collected a total of 384 cases. The obtained results indicate both the quality of the assessed software and the quality in use of the novel model. The intense ergonomic orientation of the novel measurement model was particularly emphasised. The resulting novel model is multiply validated, comprehensive and universal. It could be used to assess the user-perceived quality of almost all forms of ambulatory EHR software and is therefore useful to all stakeholders in this area of health care informatisation.

  17. A recursive Bayesian approach for fatigue damage prognosis: An experimental validation at the reliability component level

    NASA Astrophysics Data System (ADS)

    Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.

    2014-04-01

    Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology — previously developed by the authors — for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead either to an extension of the RFL (with a consequent economic gain without compromising the minimum safety requirements) or to an increase in safety by detecting a premature fault and therefore avoiding a very costly catastrophic failure.
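
    The recursive Bayesian updating scheme referenced above can be illustrated with a much-reduced example. The sketch below is not the authors' framework: it assumes a simple exponential crack-growth law with one uncertain rate parameter, updates a grid-based posterior from noisy NDE crack-size measurements, and reports a posterior median remaining-fatigue-life estimate; all numbers are hypothetical.

```python
import numpy as np

# Grid-based recursive Bayesian update of an uncertain crack-growth rate "g",
# assuming (for illustration only) that crack size grows as a(k) = A0 * exp(g * k).
g_grid = np.linspace(0.001, 0.05, 500)           # candidate growth rates per load block
posterior = np.ones_like(g_grid) / g_grid.size   # flat prior
A0, MEAS_SIGMA, A_CRIT = 1.0, 0.05, 3.0          # initial size, NDE noise, critical size (mm)

def update(posterior, k, measured_a):
    """Multiply the running posterior by the likelihood of one NDE measurement."""
    predicted = A0 * np.exp(g_grid * k)
    likelihood = np.exp(-0.5 * ((measured_a - predicted) / MEAS_SIGMA) ** 2)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

def median_rfl(posterior, k):
    """Posterior median of the number of load blocks left until the critical size."""
    rfl = np.log(A_CRIT / (A0 * np.exp(g_grid * k))) / g_grid
    order = np.argsort(rfl)
    cdf = np.cumsum(posterior[order])
    return rfl[order][np.searchsorted(cdf, 0.5)]

# Synthetic inspections consistent with a true growth rate of about 0.02 per block.
for k, a_meas in [(10, 1.22), (20, 1.49), (30, 1.83)]:
    posterior = update(posterior, k, a_meas)
    print(f"after inspection at block {k}: median RFL ~ {median_rfl(posterior, k):.0f} blocks")
```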

  18. Analysis of l-glutamic acid fermentation by using a dynamic metabolic simulation model of Escherichia coli

    PubMed Central

    2013-01-01

    Background Understanding the process of amino acid fermentation as a comprehensive system is a challenging task. Previously, we developed a literature-based dynamic simulation model, which included transcriptional regulation, transcription, translation, and enzymatic reactions related to glycolysis, the pentose phosphate pathway, the tricarboxylic acid (TCA) cycle, and the anaplerotic pathway of Escherichia coli. During simulation, cell growth was defined so as to reproduce the experimental cell growth profile of fed-batch cultivation in jar fermenters. However, to confirm the biological appropriateness of our model, sensitivity analysis and experimental validation were required. Results We constructed an l-glutamic acid fermentation simulation model by removing sucAB, a gene encoding α-ketoglutarate dehydrogenase. We then performed systematic sensitivity analysis for l-glutamic acid production; the results of this process corresponded with previous experimental data regarding l-glutamic acid fermentation. Furthermore, it allowed us to predict the possibility that accumulation of 3-phosphoglycerate in the cell would regulate the carbon flux into the TCA cycle and lead to an increase in the yield of l-glutamic acid via fermentation. We validated this hypothesis through a fermentation experiment involving a model l-glutamic acid-production strain, E. coli MG1655 ΔsucA in which the phosphoglycerate kinase gene had been amplified to cause accumulation of 3-phosphoglycerate. The observed increase in l-glutamic acid production verified the biologically meaningful predictive power of our dynamic metabolic simulation model. Conclusions In this study, dynamic simulation using a literature-based model was shown to be useful for elucidating the precise mechanisms involved in fermentation processes inside the cell. Further exhaustive sensitivity analysis will facilitate identification of novel factors involved in the metabolic regulation of amino acid fermentation. PMID:24053676

  19. Analysis of L-glutamic acid fermentation by using a dynamic metabolic simulation model of Escherichia coli.

    PubMed

    Nishio, Yousuke; Ogishima, Soichi; Ichikawa, Masao; Yamada, Yohei; Usuda, Yoshihiro; Masuda, Tadashi; Tanaka, Hiroshi

    2013-09-22

    Understanding the process of amino acid fermentation as a comprehensive system is a challenging task. Previously, we developed a literature-based dynamic simulation model, which included transcriptional regulation, transcription, translation, and enzymatic reactions related to glycolysis, the pentose phosphate pathway, the tricarboxylic acid (TCA) cycle, and the anaplerotic pathway of Escherichia coli. During simulation, cell growth was defined so as to reproduce the experimental cell growth profile of fed-batch cultivation in jar fermenters. However, to confirm the biological appropriateness of our model, sensitivity analysis and experimental validation were required. We constructed an L-glutamic acid fermentation simulation model by removing sucAB, a gene encoding α-ketoglutarate dehydrogenase. We then performed systematic sensitivity analysis for L-glutamic acid production; the results of this process corresponded with previous experimental data regarding L-glutamic acid fermentation. Furthermore, it allowed us to predict the possibility that accumulation of 3-phosphoglycerate in the cell would regulate the carbon flux into the TCA cycle and lead to an increase in the yield of L-glutamic acid via fermentation. We validated this hypothesis through a fermentation experiment involving a model L-glutamic acid-production strain, E. coli MG1655 ΔsucA in which the phosphoglycerate kinase gene had been amplified to cause accumulation of 3-phosphoglycerate. The observed increase in L-glutamic acid production verified the biologically meaningful predictive power of our dynamic metabolic simulation model. In this study, dynamic simulation using a literature-based model was shown to be useful for elucidating the precise mechanisms involved in fermentation processes inside the cell. Further exhaustive sensitivity analysis will facilitate identification of novel factors involved in the metabolic regulation of amino acid fermentation.

  20. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    PubMed

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  1. A comprehensive scoring system to measure healthy community design in land use plans and regulations.

    PubMed

    Maiden, Kristin M; Kaplan, Marina; Walling, Lee Ann; Miller, Patricia P; Crist, Gina

    2017-02-01

    Comprehensive land use plans and their corresponding regulations play a role in determining the nature of the built environment and community design, which are factors that influence population health and health disparities. To determine the level to which a plan addresses healthy living and active design, there is a need for a systematic, reliable and valid method of analyzing and scoring health-related content in plans and regulations. This paper describes the development and validation of a scoring tool designed to measure the strength and comprehensiveness of health-related content found in land use plans and the corresponding regulations. The measures are scored based on the presence of a specific item and the specificity and action-orientation of language. To establish reliability and validity, 42 land use plans and regulations from across the United States were scored January-April 2016. Results of the psychometric analysis indicate the scorecard is a reliable scoring tool for land use plans and regulations related to healthy living and active design. Intraclass correlation coefficient (ICC) scores showed strong inter-rater reliability for total strength and comprehensiveness. ICC scores for total implementation scores showed acceptable consistency among scorers. Cronbach's alpha values for all focus areas were acceptable. Strong content validity was measured through a committee vetting process. The development of this tool has far-reaching implications, bringing standardization of measurement to the field of land use plan assessment, and paving the way for systematic inclusion of health-related design principles, policies, and requirements in land use plans and their corresponding regulations. Copyright © 2016 Elsevier Inc. All rights reserved.
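
    The reliability statistics reported above (ICC and Cronbach's alpha) are standard psychometric quantities. As one example, Cronbach's alpha for a focus area can be computed from an observations-by-items score matrix as sketched below; the plan scores shown are hypothetical and not from the validation sample.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_observations x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical scores for 6 land use plans on 4 items of one focus area (0-2 scale).
scores = [[2, 1, 2, 2],
          [0, 0, 1, 0],
          [1, 1, 1, 2],
          [2, 2, 2, 2],
          [0, 1, 0, 0],
          [1, 2, 1, 1]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```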

  2. The Effects of Self-Regulation Strategies on Reading Comprehension, Motivation for Learning, and Self-Efficacy with Struggling Readers

    ERIC Educational Resources Information Center

    Cosentino, Cassandra L.

    2017-01-01

    The purpose of this quasi-experimental study was to investigate the effect of a self-regulation treatment on sixth grade students' reading comprehension, motivation for learning, and self-efficacy perceptions. The research took place in three urban schools in the northeast United States in the winter of 2016. The study's quasi-experimental design…

  3. The Impact of Cross-Age Peer Tutoring on Third and Sixth Graders' Reading Strategy Awareness, Reading Strategy Use, and Reading Comprehension

    ERIC Educational Resources Information Center

    Van Keer, Hilde; Vanderlinde, Ruben

    2010-01-01

    The present study explores the impact of an experimental reading intervention focusing on explicit reading strategy instruction and cross-age peer tutoring on third and sixth graders' reading strategy awareness, cognitive and metacognitive reading strategy use, and reading comprehension achievement. A quasi-experimental pretest-posttest design was…

  4. The Effectiveness of the Barton’s Intervention Program on Reading Comprehension and Reading Attitude of Students with Dyslexia

    PubMed Central

    Mihandoost, Zeinab; Elias, Habibah

    2011-01-01

    Objective: The current research tested for differences in reading attitude and reading comprehension between dyslexic students in the control group and those in the experimental group following the Barton intervention program. Methods: A dyslexia screening instrument and a reading text were employed to identify dyslexic students. The population of the study included 138 dyslexic students studying in schools in Ilam, Iran. From this population, 64 students were randomly selected and assigned to an experimental group and a control group. The experimental group was taught for 36 sessions using Barton's method at two levels, and ten lessons were provided to improve reading skill. The reading comprehension and reading attitude instruments were employed to measure attitude and comprehension before and after the intervention program. Results: The analysis of covariance showed a significant difference between the control group and the experimental group following the Barton intervention program. Conclusion: This study showed that dyslexic students learned to read, and that more direct instruction in decoding could influence their progress more than general exposure to education. PMID:24644446

  5. 'Mechanical restraint-confounders, risk, alliance score': testing the clinical validity of a new risk assessment instrument.

    PubMed

    Deichmann Nielsen, Lea; Bech, Per; Hounsgaard, Lise; Alkier Gildberg, Frederik

    2017-08-01

    Unstructured risk assessment, as well as confounders (underlying reasons for the patient's risk behaviour and alliance), risk behaviour, and parameters of alliance, have been identified as factors that prolong the duration of mechanical restraint among forensic mental health inpatients. To clinically validate a new, structured short-term risk assessment instrument called the Mechanical Restraint-Confounders, Risk, Alliance Score (MR-CRAS), with the intended purpose of supporting the clinicians' observation and assessment of the patient's readiness to be released from mechanical restraint. The content and layout of MR-CRAS and its user manual were evaluated using face validation by forensic mental health clinicians, content validation by an expert panel, and pilot testing within two, closed forensic mental health inpatient units. The three sub-scales (Confounders, Risk, and a parameter of Alliance) showed excellent content validity. The clinical validations also showed that MR-CRAS was perceived and experienced as a comprehensible, relevant, comprehensive, and useable risk assessment instrument. MR-CRAS contains 18 clinically valid items, and the instrument can be used to support the clinical decision-making regarding the possibility of releasing the patient from mechanical restraint. The present three studies have clinically validated a short MR-CRAS scale that is currently being psychometrically tested in a larger study.

  6. Validation through Understanding Test-Taking Strategies: An Illustration With the CELPIP-General Reading Pilot Test Using Structural Equation Modeling

    ERIC Educational Resources Information Center

    Wu, Amery D.; Stone, Jake E.

    2016-01-01

    This article explores an approach for test score validation that examines test takers' strategies for taking a reading comprehension test. The authors formulated three working hypotheses about score validity pertaining to three types of test-taking strategy (comprehending meaning, test management, and test-wiseness). These hypotheses were…

  7. Elaborations for the Validation of Causal Bridging Inferences in Text Comprehension

    ERIC Educational Resources Information Center

    Morishima, Yasunori

    2016-01-01

    The validation model of causal bridging inferences proposed by Singer and colleagues (e.g., Singer in "Can J Exp Psychol," 47(2):340-359, 1993) claims that before a causal bridging inference is accepted, it must be validated by existing knowledge. For example, to understand "Dorothy took the aspirins. Her pain went away," one…

  8. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.

  9. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  10. MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redler, G; Bernard, D; Templeton, A

    2015-06-15

    Purpose: A novel scatter imaging modality is developed and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known compositions (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation using the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results when imaging the water, lung, and cortical bone equivalent objects show close agreement, thus validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue. CNR=4.1 and CNR=29.1 for 10 MU and 5000 MU images (equivalent to 0.5 and 250 second images), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes, but depends on accurate target localization. A comprehensive approach, employing multiple simulation techniques and experiments, is taken to demonstrate the feasibility of a novel scatter imaging modality for the necessary real-time image guidance.
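
    The contrast-to-noise ratio quoted above is a standard figure of merit; a minimal sketch of one common definition, CNR = |mean(tumor ROI) - mean(background ROI)| / std(background ROI), is given below with made-up region-of-interest values rather than the phantom data from the record.

        import numpy as np

        def cnr(roi_signal, roi_background):
            roi_signal = np.asarray(roi_signal, dtype=float)
            roi_background = np.asarray(roi_background, dtype=float)
            return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std(ddof=1)

        rng = np.random.default_rng(0)
        tumor_roi = rng.normal(120.0, 5.0, size=500)     # hypothetical scatter-image pixel values
        lung_roi = rng.normal(100.0, 5.0, size=500)
        print(f"CNR = {cnr(tumor_roi, lung_roi):.1f}")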

  11. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures. PMID:22164106

  12. A comprehensive equivalent circuit model of all-vanadium redox flow battery for power system analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhao, Jiyun; Wang, Peng; Skyllas-Kazacos, Maria; Xiong, Binyu; Badrinarayanan, Rajagopalan

    2015-09-01

    Electrical equivalent circuit models demonstrate excellent adaptability and simplicity in predicting the electrical dynamic response of the all-vanadium redox flow battery (VRB) system. However, only a few publications that focus on this topic are available. This paper presents a comprehensive equivalent circuit model of the VRB for system-level analysis. The least-squares method is used to identify both the steady-state and dynamic characteristics of the VRB. The inherent features of the flow battery such as shunt current, ion diffusion and pumping energy consumption are also considered. The proposed model consists of an open-circuit voltage source, two parasitic shunt bypass circuits, a first-order resistor-capacitor network and a hydraulic circuit model. Validated with experimental data, the proposed model demonstrates excellent accuracy: the mean errors of terminal voltage and pump power consumption are 0.09 V and 0.49 W, respectively. Based on the proposed model, self-discharge and system efficiency are studied, and an optimal flow rate which maximizes the system efficiency is identified. Finally, the dynamic responses of the proposed VRB model under step current profiles are presented, providing variables such as SOC and stack terminal voltage.
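
    As a hedged sketch of the electrical portion of such a model, the code below simulates the terminal voltage of an open-circuit voltage source in series with an ohmic resistance and one first-order RC branch under a current step. The parameter values are placeholders, not the identified VRB parameters from the paper, and the hydraulic and shunt-current parts are omitted.

        import numpy as np

        def terminal_voltage(i_load, dt, ocv=1.4, r0=0.05, r1=0.02, c1=500.0):
            """Cell voltage for a discharge current profile (positive = discharging)."""
            v_rc = 0.0                                   # voltage across the RC branch
            v_out = []
            for i in i_load:
                # backward-Euler update of the first-order RC branch
                v_rc = (v_rc + dt * i / c1) / (1.0 + dt / (r1 * c1))
                v_out.append(ocv - r0 * i - v_rc)
            return np.array(v_out)

        t = np.arange(0.0, 60.0, 0.1)
        current = np.where(t < 30.0, 10.0, 0.0)          # 10 A discharge step, then rest
        v = terminal_voltage(current, dt=0.1)
        print(v[0], v[299], v[-1])                       # start of load, end of load, after relaxation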

  13. 2-D and 3-D oscillating wing aerodynamics for a range of angles of attack including stall

    NASA Technical Reports Server (NTRS)

    Piziali, R. A.

    1994-01-01

    A comprehensive experimental investigation of the pressure distribution over a semispan wing undergoing pitching motions representative of a helicopter rotor blade was conducted. Testing the wing in the nonrotating condition isolates the three-dimensional (3-D) blade aerodynamic and dynamic stall characteristics from the complications of the rotor blade environment. The test has generated a very complete, detailed, and accurate body of data. These data include static and dynamic pressure distributions, surface flow visualizations, two-dimensional (2-D) airfoil data from the same model and installation, and important supporting blockage and wall pressure distributions. This body of data is sufficiently comprehensive and accurate that it can be used for the validation of rotor blade aerodynamic models over a broad range of the important parameters including 3-D dynamic stall. This data report presents all the cycle-averaged lift, drag, and pitching moment coefficient data versus angle of attack obtained from the instantaneous pressure data for the 3-D wing and the 2-D airfoil. Also presented are examples of the following: cycle-to-cycle variations occurring for incipient or lightly stalled conditions; 3-D surface flow visualizations; supporting blockage and wall pressure distributions; and underlying detailed pressure results.

  14. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.

  15. Envelope analysis of rotating machine vibrations in variable speed conditions: A comprehensive treatment

    NASA Astrophysics Data System (ADS)

    Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.

    2017-02-01

    Nowadays, the vibration analysis of rotating machine signals is a well-established methodology, rooted in powerful tools offered, in particular, by the theory of cyclostationary (CS) processes. Among them, the squared envelope spectrum (SES) is probably the most popular for detecting random CS components, which are typical symptoms, for instance, of rolling element bearing faults. Recent research has shifted towards extending existing CS tools - originally devised for constant speed conditions - to the case of variable speed conditions. Many of these works combine the SES with computed order tracking after some preprocessing steps. The principal objective of this paper is to organize these dispersed efforts into a structured, comprehensive framework. Three original contributions are provided. First, a model of rotating machine signals is introduced which sheds light on the various components to be expected in the SES. Second, a critical comparison is made of three sophisticated methods used for suppressing the deterministic part, namely, the improved synchronous average, the cepstrum prewhitening, and the generalized synchronous average. Also, a general envelope enhancement methodology which combines the latter two techniques with a time-domain filtering operation is revisited. All theoretical findings are validated experimentally on simulated and real-world vibration signals.
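
    A minimal sketch of the squared envelope spectrum itself (computed here from the analytic signal via the Hilbert transform) is shown below; the simulated bearing-like signal and fault frequency are illustrative assumptions, and none of the variable-speed preprocessing discussed in the paper is included.

        import numpy as np
        from scipy.signal import hilbert

        fs = 20000.0                                     # sampling rate, Hz
        t = np.arange(0.0, 1.0, 1.0 / fs)
        fault_freq = 107.0                               # hypothetical bearing fault rate, Hz
        carrier = np.sin(2 * np.pi * 3000.0 * t)         # structural resonance excited by impacts
        impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.99).astype(float)
        x = carrier * np.convolve(impacts, np.exp(-np.arange(200) / 20.0), mode="same")
        x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

        envelope_sq = np.abs(hilbert(x)) ** 2            # squared envelope via the analytic signal
        ses = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean())) / t.size
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
        band = (freqs > 50.0) & (freqs < 200.0)
        print("dominant envelope frequency:", freqs[band][np.argmax(ses[band])], "Hz")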

  16. Comprehensive tire-road friction coefficient estimation based on signal fusion method under complex maneuvering operations

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, K.; Jia, G.; Ran, X.; Song, J.; Han, Z.-Q.

    2015-05-01

    The accurate estimation of the tire-road friction coefficient plays a significant role in vehicle dynamics control. The estimation method should be timely and reliable for control purposes, meaning the contact friction characteristics between the tire and the road should be recognized before control intervention is needed, to protect the driver and passengers from drifting and loss of control. In addition, the estimation method should be stable and feasible under complex maneuvering operations to guarantee control performance. This paper proposes a signal fusion method that combines the available signals to estimate road friction, building on individual estimates obtained for braking, driving and steering conditions. The maneuvering condition is recognized from the input characteristics and the vehicle and tire states provided by sensors; from this, certainty factors for the friction estimates of the three conditions mentioned above are obtained, and the comprehensive road friction is then calculated. Experimental vehicle tests validate the effectiveness of the proposed method through complex maneuvering operations; the estimated road friction coefficient based on the signal fusion method is sufficiently timely and accurate to satisfy the control demands.
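
    A hedged sketch of one way such certainty-weighted fusion could be written is given below; the weighting rule, condition names and numbers are assumptions for illustration, not the authors' formulation.

        def fuse_friction(estimates, certainties):
            """estimates/certainties: dicts keyed by maneuvering condition."""
            total = sum(certainties.values())
            if total == 0.0:
                raise ValueError("no condition currently provides a usable estimate")
            return sum(estimates[k] * certainties[k] for k in estimates) / total

        mu_hat = {"braking": 0.82, "driving": 0.78, "steering": 0.70}   # per-condition estimates
        cf = {"braking": 0.6, "driving": 0.1, "steering": 0.3}          # certainty factors from the recognized maneuver
        print(f"fused tire-road friction coefficient: {fuse_friction(mu_hat, cf):.2f}")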

  17. Documentation of validity for the AT-SAT computerized test battery. Volume 2

    DOT National Transportation Integrated Search

    2001-03-01

    This document is a comprehensive report on a large-scale research project to develop and validate a computerized selection battery to hire Air Traffic Control Specialists (ATCSs) for the Federal Aviation Administration (FAA). The purpose of this ...

  18. Documentation of validity for the AT-SAT computerized test battery. Volume 1

    DOT National Transportation Integrated Search

    2001-03-01

    This document is a comprehensive report on a large-scale research project to develop and validate a computerized selection battery to hire Air Traffic Control Specialists (ATCSs) for the Federal Aviation Administration (FAA). The purpose of this ...

  19. Personal autonomy and the flexible school

    NASA Astrophysics Data System (ADS)

    Aviram, Aharon

    1993-09-01

    The paper starts by emphasizing the importance of the ideal of personal autonomy as a central educational aim within liberal-democratic thought. Although this ideal has been accorded different meanings in the past 200 years, this paper focuses on J. S. Mill's view of autonomy - a very influential view within the liberal tradition and one still relevant for us today. The basic educational recommendation stemming from Mill's view of this ideal is the need to encourage "experimentation in living" by young people to enable them to discover their authentic wishes, capabilities and interests and to exercise themselves in the formation of "life-plans". The paper points to the sharp contradiction between democratic educational thought and practice: between the ideal of autonomy and the prevailing rigid and closed school structure which usually prevents true experimentation in living. It explains this contradiction as stemming from didactic and social considerations that were valid in industrial democratic societies. The paper's main claim is that due to the electronic revolution and its social consequences, the validity of these considerations is drastically and rapidly eroding in post-industrial democratic societies, and that, therefore, a much more flexible and open school structure is today not only desirable but also didactically and socially possible. The paper ends by presenting the "School as a Communications Center" model of a flexible school that reflects the above rationale. This model is now in the first stage of its implementation at a comprehensive high-school in Beer-Sheva, Israel.

  20. Laboratory longitudinal diffusion tests: 1. Dimensionless formulations and validity of simplified solutions

    NASA Astrophysics Data System (ADS)

    Takeda, M.; Nakajima, H.; Zhang, M.; Hiratsuka, T.

    2008-04-01

    To obtain reliable diffusion parameters for diffusion testing, multiple experiments should not only be cross-checked but the internal consistency of each experiment should also be verified. In the through- and in-diffusion tests with solution reservoirs, test interpretation of different phases often makes use of simplified analytical solutions. This study explores the feasibility of steady, quasi-steady, equilibrium and transient-state analyses using simplified analytical solutions with respect to (i) valid conditions for each analytical solution, (ii) potential error, and (iii) experimental time. For increased generality, a series of numerical analyses are performed using unified dimensionless parameters and the results are all related to dimensionless reservoir volume (DRV) which includes only the sorptive parameter as an unknown. This means the above factors can be investigated on the basis of the sorption properties of the testing material and/or tracer. The main findings are that steady, quasi-steady and equilibrium-state analyses are applicable when the tracer is not highly sorptive. However, quasi-steady and equilibrium-state analyses become inefficient or impractical compared to steady state analysis when the tracer is non-sorbing and material porosity is significantly low. Systematic and comprehensive reformulation of analytical models enables the comparison of experimental times between different test methods. The applicability and potential error of each test interpretation can also be studied. These can be applied in designing, performing, and interpreting diffusion experiments by deducing DRV from the available information for the target material and tracer, combined with the results of this study.

  1. Waste tyre pyrolysis: modelling of a moving bed reactor.

    PubMed

    Aylón, E; Fernández-Colino, A; Murillo, R; Grasa, G; Navarro, M V; García, T; Mastral, A M

    2010-12-01

    This paper describes the development of a new model for waste tyre pyrolysis in a moving bed reactor. The model comprises three different sub-models: a kinetic sub-model that predicts solid conversion in terms of reaction time and temperature, a heat transfer sub-model that calculates the temperature profile inside the particle and the energy flux from the surroundings to the tyre particles and, finally, a hydrodynamic sub-model that predicts the solid flow pattern inside the reactor. These three sub-models have been integrated to develop a comprehensive reactor model. Experimental results were obtained in a continuous moving bed reactor and used to validate the model predictions, with good agreement achieved between the experimental and simulated results. In addition, a parametric study of the model was carried out, which showed that the tyre particle heating time is much shorter than the average particle residence time inside the reactor. This fast particle heating, together with fast reaction kinetics, enables total solid conversion to be achieved in this system, in accordance with the predictive model. Copyright © 2010 Elsevier Ltd. All rights reserved.
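
    As an illustration of the kind of kinetic sub-model described, the sketch below evaluates a single first-order Arrhenius conversion step as a function of time and temperature; the rate parameters are placeholders rather than the fitted tyre kinetics from the paper.

        import numpy as np

        R_GAS = 8.314                                    # J/(mol K)

        def conversion(t, temperature_K, A=1.0e7, Ea=1.5e5):
            """First-order solid conversion X(t) = 1 - exp(-k t), with k = A exp(-Ea / (R T))."""
            k = A * np.exp(-Ea / (R_GAS * temperature_K))
            return 1.0 - np.exp(-k * np.asarray(t))

        t = np.linspace(0.0, 600.0, 7)                   # residence times, s
        print(np.round(conversion(t, temperature_K=823.0), 3))   # roughly 550 degC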

  2. Design optimization of beta- and photovoltaic conversion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wichner, R.; Blum, A.; Fischer-Colbrie, E.

    1976-01-08

    This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material -- single or polycrystalline -- provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.
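
    A very simplified sketch of the type of device calculation such a code performs is given below: the single-diode I-V relation for an illuminated p-n junction and the resulting maximum-power-point efficiency. All parameter values, including the incident power, are illustrative assumptions.

        import numpy as np

        Q = 1.602e-19                                    # elementary charge, C
        K_B = 1.381e-23                                  # Boltzmann constant, J/K

        def cell_current(v, i_light=0.035, i_sat=1e-12, n=1.0, temp=300.0):
            """Single-diode model: current delivered at terminal voltage v."""
            vt = n * K_B * temp / Q
            return i_light - i_sat * (np.exp(v / vt) - 1.0)

        v = np.linspace(0.0, 0.7, 2000)
        p = v * cell_current(v)
        idx = np.argmax(p)
        incident_power = 0.1                             # W of light assumed to reach the cell
        print(f"Vmp = {v[idx]:.3f} V, Pmax = {1e3 * p[idx]:.1f} mW, eta = {p[idx] / incident_power:.1%}")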

  3. Jet-Surface Interaction Test: Flow Measurements Results

    NASA Technical Reports Server (NTRS)

    Brown, Cliff; Wernet, Mark

    2014-01-01

    Modern aircraft design often puts the engine exhaust in close proximity to the airframe surfaces. Aircraft noise prediction tools must continue to develop in order to meet the challenges these aircraft present. The Jet-Surface Interaction Tests have been conducted to provide a comprehensive, high-quality set of experimental data suitable for the development and validation of these exhaust noise prediction methods. Flow measurements have been acquired using streamwise and cross-stream particle image velocimetry (PIV), and fluctuating surface pressure data have been acquired using flush-mounted pressure transducers near the surface trailing edge. These data, combined with previously reported far-field and phased array noise measurements, represent the first step toward this experimental database. The flow data are particularly applicable to the development of noise prediction methods which rely on computational fluid dynamics to uncover the flow physics. A representative sample of the large flow data set acquired is presented here to show how a surface near a jet affects the turbulent kinetic energy in the plume, the spatial relationship between the jet plume and the surface needed to generate surface trailing-edge noise, and differences between heated and unheated jet flows with respect to surfaces.

  4. 3D optimization of a polymer MOEMS for active focusing of VCSEL beam

    NASA Astrophysics Data System (ADS)

    Abada, S.; Camps, T.; Reig, B.; Doucet, JB; Daran, E.; Bardinal, V.

    2014-05-01

    We report on the optimized design of a polymer-based actuator that can be directly integrated on a VCSEL for vertical beam scanning. Its operating principle is based on the vertical displacement of an SU-8 membrane including a polymer microlens. Under an applied thermal gradient, the membrane is shifted vertically due to thermal expansion in the actuation arms induced by the Joule effect. This modifies the microlens position and thus scans the laser beam vertically. Vertical membrane displacements as high as 8 μm for only 3 V applied were recently obtained experimentally. To explain this performance, we developed a comprehensive three-dimensional thermo-mechanical model that takes into account the SU-8 material properties and the precise MOEMS geometry. Out-of-plane mechanical coefficients and thermal conductivity were thus integrated in our 3D model (COMSOL Multiphysics). Vertical displacements extracted from these data for different actuation powers were successfully compared to experimental values, validating this modelling tool. The model was then exploited to increase the MOEMS electrothermal performance by a factor of more than 5.

  5. Survey of the supporting research and technology for the thermal protection of the Galileo Probe

    NASA Technical Reports Server (NTRS)

    Howe, J. T.; Pitts, W. C.; Lundell, J. H.

    1981-01-01

    The Galileo Probe, which is scheduled to be launched in 1985 and to enter the hydrogen-helium atmosphere of Jupiter up to 1,475 days later, presents thermal protection problems that are far more difficult than those experienced in previous planetary entry missions. The high entry speed of the Probe will cause forebody heating rates orders of magnitude greater than those encountered in the Apollo and Pioneer Venus missions, severe afterbody heating from base-flow radiation, and thermochemical ablation rates for carbon phenolic that rival the free-stream mass flux. This paper presents a comprehensive survey of the experimental work and computational research that provide technological support for the Probe's heat-shield design effort. The survey includes atmospheric modeling; both approximate and first-principle computations of flow fields and heat-shield material response; base heating; turbulence modelling; new computational techniques; experimental heating and materials studies; code validation efforts; and a set of 'consensus' first-principle flow-field solutions through the entry maneuver, with predictions of the corresponding thermal protection requirements.

  6. An improved model of homogeneous nucleation for high supersaturation conditions: aluminum vapor.

    PubMed

    Savel'ev, A M; Starik, A M

    2016-12-21

    A novel model of stationary nucleation, treating the thermodynamic functions of small clusters, has been built. The model is validated against the experimental data on the nucleation rate of water vapor obtained in a broad range of supersaturation values (S = 10-120), and, at high supersaturation values, it reproduces the experimental data much better than the traditional classical nucleation model. A comprehensive analysis of the nucleation of aluminum vapor with the usage of developed stationary and non-stationary nucleation models has been performed. It has been shown that, at some value of supersaturation, there exists a double potential nucleation barrier. It has been revealed that the existence of this barrier notably delayed the establishment of a stationary distribution of subcritical clusters. It has also been demonstrated that the non-stationary model of the present work and the model of liquid-droplet approximation predict different values of nucleation delay time, τs. In doing so, the liquid-droplet model can underestimate notably (by more than an order of magnitude) the value of τs.
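
    For reference, the classical nucleation model against which the new model is compared is usually written as J = J0 exp(-ΔG*/(kB T)) with ΔG* = 16π σ³ v² / (3 (kB T ln S)²); the sketch below evaluates this expression with rough, water-like placeholder properties rather than the paper's data.

        import math

        K_B = 1.380649e-23                               # Boltzmann constant, J/K

        def cnt_barrier(sigma, v_mol, temp, S):
            """Critical Gibbs energy of cluster formation for supersaturation S > 1."""
            return 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * (K_B * temp * math.log(S))**2)

        def cnt_rate(J0, sigma, v_mol, temp, S):
            return J0 * math.exp(-cnt_barrier(sigma, v_mol, temp, S) / (K_B * temp))

        sigma = 0.072                                    # N/m, water-like surface tension placeholder
        v_mol = 3.0e-29                                  # m^3, molecular volume placeholder
        for S in (10, 50, 120):
            print(S, f"{cnt_rate(1e32, sigma, v_mol, 300.0, S):.3e}")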

  7. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review.

    PubMed

    Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan

    2015-09-01

    Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; and without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched MEDLINE (PubMed interface), Web of Science, Science direct, PsycINFO, and also evidence base Medicine (The Cochrane Library) databases from 1990 to June 2015. Search strategy was Latin transcription of 'Theory of Mind' AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: Valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children; and exclusion criteria were as follows: the studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description about structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In primary searching, we found 1237 articles in total databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. There were a few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests were different in populations, tasks, mode of presentations, scoring, mode of responses, times and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that the researchers and clinicians select the ToM tests according to their psychometric characteristics, validity and reliability.

  8. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review

    PubMed Central

    Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan

    2015-01-01

    Objective: Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; and without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched MEDLINE (PubMed interface), Web of Science, Science direct, PsycINFO, and also evidence base Medicine (The Cochrane Library) databases from 1990 to June 2015. Search strategy was Latin transcription of ‘Theory of Mind’ AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: Valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children; and exclusion criteria were as follows: the studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description about structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In primary searching, we found 1237 articles in total databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were a few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests were different in populations, tasks, mode of presentations, scoring, mode of responses, times and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that the researchers and clinicians select the ToM tests according to their psychometric characteristics, validity and reliability. PMID:27006666

  9. Effects of Gender and School Location on the Ekiti State Secondary Schools Students' Achievement in Reading Comprehension in English Language

    ERIC Educational Resources Information Center

    Akinwumi, Julius Olaitan

    2017-01-01

    The purpose of this study was to find out the effects of gender and school location on the Ekiti State secondary school students' achievement in reading comprehension in English language. The study adopted a pre-test, post-test and control quasi-experimental research design using two experimental groups and one control group. The sample for the study…

  10. Improving Language-Focused Comprehension Instruction in Primary-Grade Classrooms: Impacts of the "Let's Know!" Experimental Curriculum

    ERIC Educational Resources Information Center

    Justice, Laura M.; Pratt, Amy; Logan, Jessica; Gray, Shelley

    2014-01-01

    This quasi-experimental study was designed to test the impacts of a curriculum supplement, "Let's Know!", on the quantity and quality of language-focused comprehension instruction in pre-kindergarten to third grade classrooms. Sixty classrooms (12 per each of pre-K to grade 3) were enrolled in the study, with 40 teachers assigned to…

  11. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design

    ERIC Educational Resources Information Center

    Higgins, Meaghan C.; Penney, Sarah B.; Robertson, Erin K.

    2017-01-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control,…

  12. Psychometric Properties of the "Miranda Rights Comprehension Instruments" with a Juvenile Justice Sample

    ERIC Educational Resources Information Center

    Goldstein, Naomi E. Sevin; Romaine, Christina L. Riggs; Zelle, Heather; Kalbeitzer, Rachel; Mesiarik, Constance; Wolbransky, Melinda

    2011-01-01

    This article describes the psychometric properties of the "Miranda Rights Comprehension Instruments", the revised version of Grisso's "Miranda" instruments. The original instruments demonstrated good reliability and validity in a normative sample. The revised instruments updated the content of the original instruments and were…

  13. Passage Independence within Standardized Reading Comprehension Tests

    ERIC Educational Resources Information Center

    Roy-Charland, Annie; Colangelo, Gabrielle; Foglia, Victoria; Reguigui, Leïla

    2017-01-01

    In tests used to measure reading comprehension, validity is important in obtaining accurate results. Unfortunately, studies have shown that people can correctly answer some questions of these tests without reading the related passage. These findings bring forth the need to address whether this phenomenon is observed in multiple-choice only tests…

  14. Classification of Students with Reading Comprehension Difficulties: The Roles of Motivation, Affect, and Psychopathology

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Mouzaki, Angeliki; Simos, Panagiotis; Protopapas, Athanassios

    2006-01-01

    Attempts to evaluate the cognitive-motivational profiles of students with reading comprehension difficulties have been scarce. The purpose of the present study was twofold: (a) to assess the discriminatory validity of cognitive, motivational, affective, and psychopathological variables for identification of students with reading difficulties, and…

  15. China’s Comprehensive Approach: Refining the U.S. Targeting Process to Inform U.S. Strategy

    DTIC Science & Technology

    2018-04-20

    control demonstrated by China, the subject matter expertise required to generate a comprehensive approach like China’s does exist. However, due to a vast...

  16. Predicting implementation from organizational readiness for change: a study protocol

    PubMed Central

    2011-01-01

    Background There is widespread interest in measuring organizational readiness to implement evidence-based practices in clinical care. However, there are a number of challenges to validating organizational measures, including inferential bias arising from the halo effect and method bias - two threats to validity that, while well-documented by organizational scholars, are often ignored in health services research. We describe a protocol to comprehensively assess the psychometric properties of a previously developed survey, the Organizational Readiness to Change Assessment. Objectives Our objective is to conduct a comprehensive assessment of the psychometric properties of the Organizational Readiness to Change Assessment incorporating methods specifically to address threats from halo effect and method bias. Methods and Design We will conduct three sets of analyses using longitudinal, secondary data from four partner projects, each testing interventions to improve the implementation of an evidence-based clinical practice. Partner projects field the Organizational Readiness to Change Assessment at baseline (n = 208 respondents; 53 facilities), and prospectively assesses the degree to which the evidence-based practice is implemented. We will conduct predictive and concurrent validities using hierarchical linear modeling and multivariate regression, respectively. For predictive validity, the outcome is the change from baseline to follow-up in the use of the evidence-based practice. We will use intra-class correlations derived from hierarchical linear models to assess inter-rater reliability. Two partner projects will also field measures of job satisfaction for convergent and discriminant validity analyses, and will field Organizational Readiness to Change Assessment measures at follow-up for concurrent validity (n = 158 respondents; 33 facilities). Convergent and discriminant validities will test associations between organizational readiness and different aspects of job satisfaction: satisfaction with leadership, which should be highly correlated with readiness, versus satisfaction with salary, which should be less correlated with readiness. Content validity will be assessed using an expert panel and modified Delphi technique. Discussion We propose a comprehensive protocol for validating a survey instrument for assessing organizational readiness to change that specifically addresses key threats of bias related to halo effect, method bias and questions of construct validity that often go unexplored in research using measures of organizational constructs. PMID:21777479
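
    As a minimal sketch of the inter-rater reliability statistic mentioned in the protocol, the code below computes a one-way random-effects intraclass correlation, ICC(1), from hypothetical facility-level readiness ratings; the data layout and values are invented, not the partner-project data.

        import numpy as np
        import pandas as pd

        def icc1(df, group_col, score_col):
            """One-way random-effects ICC(1) for a balanced respondents-within-facility design."""
            groups = [g[score_col].to_numpy() for _, g in df.groupby(group_col)]
            k = np.mean([len(g) for g in groups])        # average respondents per facility
            ms_between = k * np.var([g.mean() for g in groups], ddof=1)
            ms_within = np.mean([np.var(g, ddof=1) for g in groups])
            return (ms_between - ms_within) / (ms_between + (k - 1.0) * ms_within)

        rng = np.random.default_rng(1)
        data = pd.DataFrame({
            "facility": np.repeat(np.arange(20), 4),     # 20 facilities, 4 respondents each
            "readiness": np.repeat(rng.normal(3.5, 0.4, 20), 4) + rng.normal(0.0, 0.3, 80),
        })
        print(f"ICC(1) = {icc1(data, 'facility', 'readiness'):.2f}")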

  17. Quantitative characterization of galectin-3-C affinity mass spectrometry measurements: Comprehensive data analysis, obstacles, shortcuts and robustness.

    PubMed

    Haramija, Marko; Peter-Katalinić, Jasna

    2017-10-30

    Affinity mass spectrometry (AMS) is an emerging tool in the field of the study of protein•carbohydrate complexes. However, experimental obstacles and data analysis are preventing faster integration of AMS methods into the glycoscience field. Here we show how analysis of direct electrospray ionization mass spectrometry (ESI-MS) AMS data can be simplified for screening purposes, even for complex AMS spectra. A direct ESI-MS assay was tested in this study and binding data for the galectin-3C•lactose complex were analyzed using a comprehensive and simplified data analysis approach. In the comprehensive data analysis approach, noise, all protein charge states, alkali ion adducts and signal overlap were taken into account. In a simplified approach, only the intensities of the fully protonated free protein and the protein•carbohydrate complex for the main protein charge state were taken into account. In our study, for high intensity signals, noise was negligible, sodiated protein and sodiated complex signals cancelled each other out when calculating the Kd value, and signal overlap influenced the Kd value only to a minor extent. Influence of these parameters on low intensity signals was much higher. However, low intensity protein charge states should be avoided in quantitative AMS analyses due to poor ion statistics. The results indicate that noise, alkali ion adducts, signal overlap, as well as low intensity protein charge states, can be neglected for preliminary experiments, as well as in screening assays. One comprehensive data analysis performed as a control should be sufficient to validate this hypothesis for other binding systems as well. Copyright © 2017 John Wiley & Sons, Ltd.
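
    A simplified sketch of how a dissociation constant is commonly extracted from such direct ESI-MS data is shown below, assuming the intensity ratio R = I(PL)/I(P) equals the solution ratio [PL]/[P]; the intensities and concentrations are invented for illustration and are not the galectin-3C•lactose data.

        def kd_from_ratio(i_complex, i_free, p0, l0):
            """All concentrations in the same units (here micromolar)."""
            r = i_complex / i_free                       # taken as [PL]/[P]
            bound = r * p0 / (1.0 + r)                   # [PL] at equilibrium
            free_ligand = l0 - bound                     # [L] at equilibrium
            return free_ligand / r                       # Kd = [P][L]/[PL] = [L]/R

        # Hypothetical 10 uM protein titrated with 200 uM lactose
        print(f"Kd ~ {kd_from_ratio(i_complex=4200.0, i_free=8800.0, p0=10.0, l0=200.0):.0f} uM")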

  18. Adaptation and Validation of the Brazilian Version of the Hope Index

    ERIC Educational Resources Information Center

    Pacico, Juliana Cerentini; Zanon, Cristian; Bastianello, Micheline Roat; Reppold, Caroline Tozzi; Hutz, Claudio Simon

    2013-01-01

    The objective of this study was to adapt and gather validity evidence for a Brazilian sample version of the Hope Index and to verify if cultural differences would produce different results than those found in the United States. In this study, we present a set of analyses that together comprise a comprehensive validity argument for the use of a…

  19. DESCQA: Synthetic Sky Catalog Validation Framework

    NASA Astrophysics Data System (ADS)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  20. Reorienting adolescent sexual and reproductive health research: reflections from an international conference.

    PubMed

    Michielsen, Kristien; De Meyer, Sara; Ivanova, Olena; Anderson, Ragnar; Decat, Peter; Herbiet, Céline; Kabiru, Caroline W; Ketting, Evert; Lees, James; Moreau, Caroline; Tolman, Deborah L; Vanwesenbeeck, Ine; Vega, Bernardo; Verhetsel, Elizabeth; Chandra-Mouli, Venkatraman

    2016-01-13

    On December 4th 2014, the International Centre for Reproductive Health (ICRH) at Ghent University organized an international conference on adolescent sexual and reproductive health (ASRH) and well-being. This viewpoint highlights two key messages of the conference--(1) ASRH promotion is broadening on different levels and (2) this broadening has important implications for research and interventions--that can guide this research field into the next decade. Adolescent sexuality has long been equated with risk and danger. However, throughout the presentations, it became clear that ASRH and related promotion efforts are broadening on different levels: from risk to well-being, from targeted and individual to comprehensive and structural, from knowledge transfer to innovative tools. However, indicators to measure adolescent sexuality that should accompany this broadening trend, are lacking. While public health related indicators (HIV/STIs, pregnancies) and their behavioral proxies (e.g., condom use, number of partners) are well developed and documented, there is a lack of consensus on indicators for the broader construct of adolescent sexuality, including sexual well-being and aspects of positive sexuality. Furthermore, the debate during the conference clearly indicated that experimental designs may not be the only appropriate study design to measure effectiveness of comprehensive, context-specific and long-term ASRH programmes, and that alternatives need to be identified and applied. Presenters at the conference clearly expressed the need to develop validated tools to measure different sub-constructs of adolescent sexuality and environmental factors. There was a plea to combine (quasi-)experimental effectiveness studies with evaluations of the development and implementation of ASRH promotion initiatives.

  1. Content and face validity of a comprehensive robotic skills training program for general surgery, urology, and gynecology.

    PubMed

    Dulan, Genevieve; Rege, Robert V; Hogg, Deborah C; Gilberg-Fisher, Kristine K; Tesfay, Seifu T; Scott, Daniel J

    2012-04-01

    The authors previously developed a comprehensive, proficiency-based robotic training curriculum that aimed to address 23 unique skills identified via task deconstruction of robotic operations. The purpose of this study was to determine the content and face validity of this curriculum. Expert robotic surgeons (n = 12) rated each deconstructed skill regarding relevance to robotic operations, were oriented to the curricular components, performed 3 to 5 repetitions on the 9 exercises, and rated each exercise. In terms of content validity, experts rated all 23 deconstructed skills as highly relevant (4.5 on a 5-point scale). Ratings for the 9 inanimate exercises indicated moderate to thorough measurement of designated skills. For face validity, experts indicated that each exercise effectively measured relevant skills (100% agreement) and was highly effective for training and assessment (4.5 on a 5-point scale). These data indicate that the 23 deconstructed skills accurately represent the appropriate content for robotic skills training and strongly support content and face validity for this curriculum. Copyright © 2012. Published by Elsevier Inc.

  2. Impact of a Technology-Mediated Reading Intervention on Adolescents' Reading Comprehension

    ERIC Educational Resources Information Center

    Fogarty, Melissa; Clemens, Nathan; Simmons, Deborah; Anderson, Leah; Davis, John; Smith, Ashley; Wang, Huan; Kwok, Oi-man; Simmons, Leslie E.; Oslund, Eric

    2017-01-01

    In this experimental study we examined the effects of a technology-mediated, multicomponent reading comprehension intervention, Comprehension Circuit Training (CCT), for middle school students, the majority of whom were struggling readers. The study was conducted in three schools, involving three teachers and 228 students. Using a within-teacher…

  3. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    NASA Astrophysics Data System (ADS)

    Tudora, A.; Hambsch, F.-J.

    2017-08-01

    The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made in both the development of prompt emission models and the experimental investigation of the properties of fission fragments and the prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n,f) recently measured at JRC-Geel (as well as other various prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the reference input parameter library RIPL of the IAEA. To provide average prompt emission quantities as a function of A, as a function of TKE, and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability in predicting prompt emission data for fissioning nuclei and incident energies for which the experimental information is completely missing. The PbP treatment can also provide input parameters of the improved Los Alamos model with non-equal residual temperature distributions recently reported by Madland and Kahler, especially for fissioning nuclei without any experimental information concerning the prompt emission.

  4. Adolescents' perception of parental feeding practices: Adaptation and validation of the Comprehensive Feeding Practices Questionnaire for Brazilian adolescents—The CFPQ-Teen

    PubMed Central

    Piccoli, Ângela Bein; Neiva-Silva, Lucas; Mosmann, Clarisse Pereira; Musher-Eizenman, Dara; Pellanda, Lucia C.

    2017-01-01

    Background Parental feeding practices may play a key role in dietary habits and nutritional status of adolescents, but research from adolescents’ point of view on this topic is scarce. Objective To adapt and validate an instrument of parental feeding practices as perceived by adolescents in a Brazilian setting. Methods The Comprehensive Feeding Practices Questionnaire was translated into Portuguese and adapted to be answered by adolescents (ages 12 to 18). Content analysis and face validity assessment of cultural equivalence were undertaken by experts in the adolescent nutrition and psychology fields. A pilot study was carried out with 23 adolescents. The final version was administered to 41 students to assess instrument reproducibility (Intraclass Correlation Coefficient). Internal consistency (Cronbach's Alpha) and construct validity (Confirmatory Factor Analysis) were assessed in a third sample of 307 adolescents. Results Experts and adolescents considered the content validity appropriate. In the reproducibility analysis (Intraclass Correlation Coefficient), 10 of the 12 factors were above 0.7. The factors “teaching about nutrition” and “food as reward” obtained values of 0.60 and 0.68, respectively. The Cronbach's Alpha of the whole scale was 0.83 and alphas for subscales ranged from 0.52 to 0.85; the factors “teaching about nutrition” and “food as a reward” had the lowest values (0.52). After removing these two factors, the Confirmatory Factor Analysis indicated that the structural model was appropriate. The final scale was made up of 10 factors with 43 questions. Conclusions The Comprehensive Feeding Practices Questionnaire-Teen demonstrates validity and reliability, and is a suitable tool to evaluate the perceptions of adolescents regarding parental feeding practices. PMID:29145485
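
    For reference, a short sketch of the internal-consistency statistic reported for the CFPQ-Teen (Cronbach's alpha) is given below; the simulated response matrix is placeholder data, not the study sample.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = items of one subscale."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

        rng = np.random.default_rng(2)
        latent = rng.normal(3.0, 1.0, size=(307, 1))                 # 307 simulated adolescents
        responses = latent + rng.normal(0.0, 0.8, size=(307, 5))     # one 5-item subscale
        print(f"alpha = {cronbach_alpha(responses):.2f}")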

  5. Sentence Comprehension in Adolescents with down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    ERIC Educational Resources Information Center

    Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  6. Reliability and validity of advanced theory-of-mind measures in middle childhood and adolescence.

    PubMed

    Hayward, Elizabeth O; Homer, Bruce D

    2017-09-01

    Although theory-of-mind (ToM) development is well documented for early childhood, there is increasing research investigating changes in ToM reasoning in middle childhood and adolescence. However, the psychometric properties of most advanced ToM measures for use with older children and adolescents have not been firmly established. We report on the reliability and validity of widely used, conventional measures of advanced ToM with this age group. Notable issues with both reliability and validity of several of the measures were evident in the findings. With regard to construct validity, results do not reveal a clear empirical commonality between tasks, and, after accounting for comprehension, developmental trends were evident in only one of the tasks investigated. Statement of contribution What is already known on this subject? Second-order false belief tasks have acceptable internal consistency. The Eyes Test has poor internal consistency. Validity of advanced theory-of-mind tasks is often based on the ability to distinguish clinical from typical groups. What does this study add? This study examines internal consistency across six widely used advanced theory-of-mind tasks. It investigates validity of tasks based on comprehension of items by typically developing individuals. It further assesses construct validity, or commonality between tasks. © 2017 The British Psychological Society.

  7. ICG: a wiki-driven knowledgebase of internal control genes for RT-qPCR normalization.

    PubMed

    Sang, Jian; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Xia, Lin; Zou, Dong; Wang, Fan; Xu, Xingjian; Han, Xiaojiao; Fan, Jinqi; Yang, Ye; Zuo, Wanzhu; Zhang, Yang; Zhao, Wenming; Bao, Yiming; Xiao, Jingfa; Hu, Songnian; Hao, Lili; Zhang, Zhang

    2018-01-04

    Real-time quantitative PCR (RT-qPCR) has become a widely used method for accurate expression profiling of targeted mRNA and ncRNA. Selection of appropriate internal control genes for RT-qPCR normalization is an elementary prerequisite for reliable expression measurement. Here, we present ICG (http://icg.big.ac.cn), a wiki-driven knowledgebase for community curation of experimentally validated internal control genes as well as their associated experimental conditions. Unlike extant related databases that focus on qPCR primers in model organisms (mainly human and mouse), ICG features harnessing collective intelligence in community integration of internal control genes for a variety of species. Specifically, it integrates a comprehensive collection of more than 750 internal control genes for 73 animals, 115 plants, 12 fungi and 9 bacteria, and incorporates detailed information on recommended application scenarios corresponding to specific experimental conditions, which, collectively, are of great help for researchers to adopt appropriate internal control genes for their own experiments. Taken together, ICG serves as a publicly editable and open-content encyclopaedia of internal control genes and accordingly bears broad utility for reliable RT-qPCR normalization and gene expression characterization in both model and non-model organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Validation of an instrument to measure inter-organisational linkages in general practice.

    PubMed

    Amoroso, Cheryl; Proudfoot, Judith; Bubner, Tanya; Jayasinghe, Upali W; Holton, Christine; Winstanley, Julie; Beilby, Justin; Harris, Mark F

    2007-12-03

    Linkages between general medical practices and external services are important for high quality chronic disease care. The purpose of this research is to describe the development, evaluation and use of a brief tool that measures the comprehensiveness and quality of a general practice's linkages with external providers for the management of patients with chronic disease. In this study, clinical linkages are defined as the communication, support, and referral arrangements between services for the care and assistance of patients with chronic disease. An interview to measure surgery-level (rather than individual clinician-level) clinical linkages was developed, piloted, reviewed, and evaluated with 97 Australian general practices. Two validated survey instruments were posted to patients, and a survey of locally available services was developed and posted to participating Divisions of General Practice (support organisations). Hypotheses regarding internal validity, association with local services, and patient satisfaction were tested using factor analysis, logistic regression and multilevel regression models. The resulting General Practice Clinical Linkages Interview (GP-CLI) is a nine-item tool with three underlying factors: referral and advice linkages, shared care and care planning linkages, and community access and awareness linkages. Local availability of chronic disease services has no effect on the comprehensiveness of services with which practices link; however, comprehensiveness of clinical linkages has an association with patient assessment of access, receptionist services, and of continuity of care in their general practice. The GP-CLI may be useful to researchers examining comparable health care systems for measuring the comprehensiveness and quality of linkages at a general practice-level with related services, possessing both internal and external validity. The tool can be used with large samples exploring the impact, outcomes, and facilitators of high quality clinical linkages in general practice.

  9. Operational Consequences of Literacy Gap.

    DTIC Science & Technology

    1980-05-01

    Table-of-contents excerpts from the report list comprehension scores on the safety and sanitation content, statistics and analyses of variance for the experimental groups by sex and content, and mean comprehension scores broken down by content, subject RGL and reading time. ...ratings along a scale of difficulty which parallels the school grade scale. Burkett (1975) and Klare (1963; 1974-1975) provide summaries of the extensive

  10. Application of the Modified Erikson Psychosocial Stage Inventory: 25 Years in Review.

    PubMed

    Darling-Fisher, Cynthia S

    2018-04-01

    The Modified Erikson Psychosocial Stage Inventory (MEPSI) is an 80-item, comprehensive measure of psychosocial development based on Erikson's theory with published reliability and validity data. Although the MEPSI was designed as a comprehensive measure, some researchers have used individual subscales as measures of specific developmental stages; however, reliability scores for these subscales have generally not been shared. This article reviewed the literature to evaluate the use of the MEPSI: the major research questions, samples/populations studied, and individual subscale and total reliability and validity data. In total, 16 research articles (1990-2011) and 28 Dissertations/Theses (1991-2016) from nursing, social work, psychology, criminal justice, and religious studies met criteria. Results support the MEPSI's global reliability (aggregate scores ranged from .89 to .99) and validity in terms of consistent patterns of changes observed in the predicted direction. Reliability and validity data for individual subscales were more variable. Limitations of the tool and recommendations for possible revision and future research are addressed.

  11. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    PubMed

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-01

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  12. [Effects of a coaching program on comprehensive lifestyle modification for women with gestational diabetes mellitus].

    PubMed

    Ko, Jung Mi; Lee, Jong Kyung

    2014-12-01

    The purpose of this study was to investigate the effects of using a Coaching Program on Comprehensive Lifestyle Modification with pregnant women who have gestational diabetes. The research design for this study was a non-equivalent control group quasi-experimental study. Pregnant women with gestational diabetes were recruited from D women's hospital located in Gyeonggi Province from April to October, 2013. There were 34 participants in the control group and 34 in the experimental group. The experimental group participated in the Coaching Program on Comprehensive Lifestyle Modification, which consisted of education, small group coaching and telephone coaching over 4 weeks. Statistical analysis was performed using the SPSS 21.0 program. There were significant improvements in self-care behavior, and decreases in depression, fasting blood sugar and HbA1C in the experimental group compared to the control group. However, no significant differences were found between the two groups for knowledge of gestational diabetes mellitus. The Coaching Program on Comprehensive Lifestyle Modification used in this study was found to be effective in improving self-care behavior and reducing depression, fasting blood sugar and HbA1C, and is recommended for use in clinical practice as an effective nursing intervention for pregnant women with gestational diabetes.

  13. Mechanisms underlying syntactic comprehension deficits in vascular aphasia: new evidence from self-paced listening.

    PubMed

    Caplan, David; Michaud, Jennifer; Hufford, Rebecca

    2015-01-01

    Sixty-one people with aphasia (pwa) and 41 matched controls were tested for the ability to understand sentences that required the ability to process particular syntactic elements and assign particular syntactic structures. Participants paced themselves word-by-word through 20 examples of 11 spoken sentence types and indicated which of two pictures corresponded to the meaning of each sentence. Sentences were developed in pairs such that comprehension of the experimental version of a pair required an aspect of syntactic processing not required in the corresponding baseline sentence. The need for the syntactic operations required only in the experimental version was triggered at a "critical word" in the experimental sentence. Listening times for critical words in experimental sentences were compared to those for corresponding words in the corresponding baseline sentences. The results were consistent with several models of syntactic comprehension deficits in pwa: resource reduction, slowed lexical and/or syntactic processing, abnormal susceptibility to interference from thematic roles generated non-syntactically. They suggest that a previously unidentified disturbance limiting the duration of parsing and interpretation may lead to these deficits, and that this mechanism may lead to structure-specific deficits in pwa. The results thus point to more than one mechanism underlying syntactic comprehension disorders both across and within pwa.

  14. Linguistic validation of translation of the self-assessment goal achievement (saga) questionnaire from English

    PubMed Central

    2012-01-01

    Background A linguistic validation of the Self-Assessment Goal Achievement (SAGA) questionnaire was conducted for 12 European languages, documenting that each translation adequately captures the concepts of the original English-language version of the questionnaire and is readily understood by subjects in the target population. Methods Native-speaking residents of the target countries who reported urinary problems/lower urinary tract problems were asked to review a translation of the SAGA questionnaire, which was harmonized among 12 languages: Danish, Dutch, English (UK), Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Spanish, and Swedish. During a cognitive debriefing interview, participants were asked to identify any words that were difficult to understand and explain in their own words the meaning of each sentence in the questionnaire. The qualitative analysis was conducted by local linguistic validation teams (original translators, back translator, project manager, interviewer, and survey research expert). Results Translations of the SAGA questionnaire from English to 12 European languages were well understood by the participants with an overall comprehension rate across language of 98.9%. In addition, the translations retained the original meaning of the SAGA items and instructions. Comprehension difficulties were identified, and after review by the translation team, minor changes were made to 7 of the 12 translations to improve clarity and comprehension. Conclusions Conceptual, semantic, and cultural equivalence of each translation of the SAGA questionnaire was achieved thus confirming linguistic validation. PMID:22525050

  15. Linguistic validation of translation of the Self-Assessment Goal Achievement (SAGA) questionnaire from English.

    PubMed

    Piault, Elisabeth; Doshi, Sameepa; Brandt, Barbara A; Angün, Çolpan; Evans, Christopher J; Bergqvist, Agneta; Trocio, Jeffrey

    2012-04-23

    A linguistic validation of the Self-Assessment Goal Achievement (SAGA) questionnaire was conducted for 12 European languages, documenting that each translation adequately captures the concepts of the original English-language version of the questionnaire and is readily understood by subjects in the target population. Native-speaking residents of the target countries who reported urinary problems/lower urinary tract problems were asked to review a translation of the SAGA questionnaire, which was harmonized among 12 languages: Danish, Dutch, English (UK), Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Spanish, and Swedish. During a cognitive debriefing interview, participants were asked to identify any words that were difficult to understand and explain in their own words the meaning of each sentence in the questionnaire. The qualitative analysis was conducted by local linguistic validation teams (original translators, back translator, project manager, interviewer, and survey research expert). Translations of the SAGA questionnaire from English to 12 European languages were well understood by the participants with an overall comprehension rate across language of 98.9%. In addition, the translations retained the original meaning of the SAGA items and instructions. Comprehension difficulties were identified, and after review by the translation team, minor changes were made to 7 of the 12 translations to improve clarity and comprehension. Conceptual, semantic, and cultural equivalence of each translation of the SAGA questionnaire was achieved thus confirming linguistic validation.

  16. Content Validity of the Hypogonadism Impact of Symptoms Questionnaire (HIS-Q): A Patient-Reported Outcome Measure to Evaluate Symptoms of Hypogonadism.

    PubMed

    Gelhorn, Heather L; Vernon, Margaret K; Stewart, Katie D; Miller, Michael G; Brod, Meryl; Althof, Stanley E; DeRogatis, Leonard R; Dobs, Adrian; Seftel, Allen D; Revicki, Dennis A

    2016-04-01

    Hypogonadism, or low testosterone, is a common disorder. There are currently no patient-reported outcome (PRO) instruments designed to comprehensively evaluate the symptoms of hypogonadism and to detect changes in these symptoms in response to treatment. The purpose of this study was to develop a PRO instrument, the Hypogonadism Impact of Symptoms Questionnaire (HIS-Q) and to assess its content validity. A literature review, expert clinician input, and qualitative concept elicitation with 39 male hypogonadism patients (four focus groups: n = 25; individual interviews: n = 14; mean age 52.3 ± 14.3 years) from the USA were used to develop the draft HIS-Q. Subsequent cognitive interviews (n = 29; mean age 51.5 ± 15.4 years) were used to evaluate content validity. Emergent discussion with participants yielded symptoms within the sexual, physical, energy, sleep, cognition, and mood domains. Low libido and tiredness were most commonly reported. The initial version of the HIS-Q includes 53 items that were consistently understood by the participants, who found the instrument to be relevant to their experiences with hypogonadism and comprehensive in the content coverage of symptoms. The HIS-Q is a comprehensive PRO measure of hypogonadism symptom severity in males. Its design elements, including the response options and recall period, were suitable, and content validity was confirmed.

  17. Separate the Sheep from the Goats: Use and Limitations of Large Animal Models in Intervertebral Disc Research.

    PubMed

    Reitmaier, Sandra; Graichen, Friedmar; Shirazi-Adl, Aboulfazl; Schmidt, Hendrik

    2017-10-04

    Approximately 5,168 large animals (pigs, sheep, goats, and cattle) were used for intervertebral disc research in identified studies published between 1985 and 2016. Most of the reviewed studies revealed a low scientific impact, a lack of sound justifications for the animal models, and a number of deficiencies in the documentation of the animal experimentation. The scientific community should take suitable measures to investigate the presumption that animal models have translational value in intervertebral disc research. Recommendations for future investigations are provided to improve the quality, validity, and usefulness of animal studies for intervertebral disc research. More in vivo studies are warranted to comprehensively evaluate the suitability of animal models in various applications and help place animal models as an integral, complementary part of intervertebral disc research.

  18. An information theory account of cognitive control

    PubMed Central

    Fan, Jin

    2014-01-01

    Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory. PMID:25228875
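
    The "quantification of information" that this account builds on is Shannon entropy. The short sketch below computes the entropy, in bits, of a discrete distribution over response alternatives; it is a generic illustration of the quantity, not code from the cited article.

        # Shannon entropy of a discrete probability distribution, in bits.
        # Generic illustration of the information-theoretic quantity; not code
        # from the cited article.
        import numpy as np

        def entropy_bits(p):
            p = np.asarray(p, dtype=float)
            p = p[p > 0]                    # ignore zero-probability outcomes
            return float(-np.sum(p * np.log2(p)))

        # Four equally likely alternatives carry 2 bits of uncertainty.
        print(entropy_bits([0.25, 0.25, 0.25, 0.25]))   # 2.0
        print(entropy_bits([0.7, 0.1, 0.1, 0.1]))       # ~1.36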

  19. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    NASA Astrophysics Data System (ADS)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This paper presents an experimental and modeling study of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during cyclic fatigue loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
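
    A hedged sketch of the kind of baseline-versus-current signal features named above (correlation coefficient, amplitude change, phase change) is given below. The feature definitions, sampling rate, and center frequency are generic placeholders, not the authors' exact multi-feature integration method.

        # Generic baseline-vs-current signal features of the kind named above:
        # correlation coefficient, amplitude change, and phase change.
        # Placeholder definitions, not the authors' exact multi-feature method.
        import numpy as np

        def lamb_wave_features(baseline, current, fs, f_center):
            corr = float(np.corrcoef(baseline, current)[0, 1])
            amp_change = float((np.max(np.abs(current)) - np.max(np.abs(baseline)))
                               / np.max(np.abs(baseline)))
            # Phase change estimated from the lag of the peak cross-correlation,
            # converted to phase at an assumed center frequency f_center.
            xcorr = np.correlate(current, baseline, mode="full")
            lag = (np.argmax(xcorr) - (len(baseline) - 1)) / fs
            phase_change = float(2.0 * np.pi * f_center * lag)
            return corr, amp_change, phase_change

        fs, f_c = 10e6, 200e3                 # hypothetical sampling and center freq
        t = np.arange(0, 200e-6, 1 / fs)
        baseline = np.sin(2 * np.pi * f_c * t) * np.exp(-((t - 60e-6) ** 2) / (10e-6) ** 2)
        current = 0.8 * np.roll(baseline, 25)  # toy "damaged" signal: weaker, delayed
        print(lamb_wave_features(baseline, current, fs, f_c))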

  20. A methodological analysis of chaplaincy research: 2000-2009.

    PubMed

    Galek, Kathleen; Flannelly, Kevin J; Jankowski, Katherine R B; Handzo, George F

    2011-01-01

    The present article presents a comprehensive review and analysis of quantitative research conducted in the United States on chaplaincy and closely related topics published between 2000 and 2009. A combined search strategy identified 49 quantitative studies in 13 journals. The analysis focuses on the methodological sophistication of the studies, compared to earlier research on chaplaincy and pastoral care. Cross-sectional surveys of convenience samples still dominate the field, but sample sizes have increased somewhat over the past three decades. Reporting of the validity and reliability of measures continues to be low, although reporting of response rates has improved. Improvements in the use of inferential statistics and statistical controls were also observed, compared to previous research. The authors conclude that more experimental research is needed on chaplaincy, along with an increased use of hypothesis testing, regardless of the research designs that are used.

  1. Experimental characterization of the weld pool flow in a TIG configuration

    NASA Astrophysics Data System (ADS)

    Stadler, M.; Masquère, M.; Freton, P.; Franceries, X.; Gonzalez, J. J.

    2014-11-01

    The Tungsten Inert Gas (TIG) welding process relies on heat transfer between the plasma and the work piece, leading to a metallic weld pool. A combination of different forces produces movement on the molten pool surface. One of our aims is to determine the velocity at the weld pool surface. This provides a set of data that leads to a deeper comprehension of the flow behavior and allows us to validate numerical models used to study TIG parameters. In this paper, two diagnostic methods based on high-speed imaging for determining the velocity of an AISI 304L stainless steel molten pool are presented. Applying the two methods to a metallic weld pool under helium with a current intensity of 100 A yields velocity values of around 0.70 m/s, which are in good agreement with the literature.

  2. Method Improving Reading Comprehension In Primary Education Program Students

    NASA Astrophysics Data System (ADS)

    Rohana

    2018-01-01

    This study aims to determine the effect of the SQ3R learning method on the English reading comprehension skills of PGSD (primary education program) students. The research is pre-experimental rather than a true experiment: there is no control group, the sample is not chosen randomly, and external variables may influence the dependent variable. The design used is a one-group pretest-posttest design involving a single experimental group, in which observations are made twice, before and after the intervention. The observation made before the intervention (O1) is the pretest and the observation made afterwards (O2) is the posttest; the difference O2 - O1 is taken as the effect of the treatment. The results showed an improvement in the reading comprehension skills of the PGSD students in Class M.4.3, indicating that the SQ3R method can improve English reading comprehension skills.
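
    The O2 - O1 contrast described above is a paired pre/post comparison. The sketch below shows that arithmetic, together with a paired t-test, on made-up scores rather than the study's data.

        # One-group pretest-posttest contrast: treatment effect estimated as O2 - O1.
        # Made-up scores for illustration; not the study's data.
        import numpy as np
        from scipy import stats

        pretest = np.array([55, 60, 48, 70, 62, 58, 66, 52], dtype=float)   # O1
        posttest = np.array([68, 71, 60, 78, 70, 69, 74, 63], dtype=float)  # O2

        effect = posttest - pretest                 # per-student O2 - O1
        t_stat, p_value = stats.ttest_rel(posttest, pretest)
        print(f"mean gain = {effect.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")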

  3. Incidence of Verbal Comparisons in Beginners' Books and in Metaphor Comprehension Research: A Search for Ecological Validity.

    ERIC Educational Resources Information Center

    Broderick, Victor

    1992-01-01

    Classifies explicit verbal comparisons in 53 popular children's books both syntactically and semantically. Comparison types found in these books were contrasted with comparisons used as comprehension stimuli in extant developmental research. Implications for the design of future stimulus sets are discussed. (17 references) (GLR)

  4. A Comprehensive Inclusion Program for Kindergarten Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Sainato, Diane M.; Morrison, Rebecca S.; Jung, Sunhwa; Axe, Judah; Nixon, Patricia A.

    2015-01-01

    To date, reports of empirically validated comprehensive intervention programs for children with autism spectrum disorder (ASD) have been limited to preschool-age children. We examined the effects of a model inclusive kindergarten program for children with ASD. Forty-one children received instruction in an inclusive kindergarten program with their…

  5. Using Listener Judgments to Investigate Linguistic Influences on L2 Comprehensibility and Accentedness: A Validation and Generalization Study

    ERIC Educational Resources Information Center

    Saito, Kazuya; Trofimovich, Pavel; Isaacs, Talia

    2017-01-01

    The current study investigated linguistic influences on comprehensibility (ease of understanding) and accentedness (linguistic nativelikeness) in second language (L2) learners' extemporaneous speech. Target materials included picture narratives from 40 native French speakers of English from different proficiency levels. The narratives were…

  6. Comprehensive Comparison of Self-Administered Questionnaires for Measuring Quantitative Autistic Traits in Adults

    ERIC Educational Resources Information Center

    Nishiyama, Takeshi; Suzuki, Masako; Adachi, Katsunori; Sumi, Satoshi; Okada, Kensuke; Kishino, Hirohisa; Sakai, Saeko; Kamio, Yoko; Kojima, Masayo; Suzuki, Sadao; Kanne, Stephen M.

    2014-01-01

    We comprehensively compared all available questionnaires for measuring quantitative autistic traits (QATs) in terms of reliability and construct validity in 3,147 non-clinical and 60 clinical subjects with normal intelligence. We examined four full-length forms, the Subthreshold Autism Trait Questionnaire (SATQ), the Broader Autism Phenotype…

  7. Self Validation: Putting the Pieces Together. [Leader's Handbook].

    ERIC Educational Resources Information Center

    Beiman, Abbie; And Others

    This document is one in a set of eight staff development training manuals developed to facilitate the efforts of educators in the planning and implementation of comprehensive career guidance programs on the secondary level (7-12). This series is based on the goals and developmental objectives identified by the Georgia Comprehensive Career Guidance…

  8. The Effects of Surface Structure Variables on Performance in Reading Comprehension Tests.

    ERIC Educational Resources Information Center

    Drum, Priscilla; And Others

    1981-01-01

    Concludes that reading comprehension tests that are valid for beginning readers should incorporate different factors than tests appropriate for upper elementary readers, since word recognition and word meaning are prime sources of difficulty for younger readers while content density depresses the performance of readers in upper elementary grades.…

  9. Bringing Science to Bear: An Empirical Assessment of the Comprehensive Soldier Fitness Program

    ERIC Educational Resources Information Center

    Lester, Paul B.; McBride, Sharon; Bliese, Paul D.; Adler, Amy B.

    2011-01-01

    This article outlines the U.S. Army's effort to empirically validate and assess the Comprehensive Soldier Fitness (CSF) program. The empirical assessment includes four major components. First, the CSF scientific staff is currently conducting a longitudinal study to determine if the Master Resilience Training program and the Comprehensive…

  10. Cross-Language Priming of Word Meaning during Second Language Sentence Comprehension

    ERIC Educational Resources Information Center

    Yuan, Yanli; Woltz, Dan; Zheng, Robert

    2010-01-01

    The experiment investigated the benefit to second language (L2) sentence comprehension of priming word meanings with brief visual exposure to first language (L1) translation equivalents. Native English speakers learning Mandarin evaluated the validity of aurally presented Mandarin sentences. For selected words in half of the sentences there was…

  11. Recommendations to guide revision of the Guides to the Evaluation of Permanent Impairment. American Medical Association.

    PubMed

    Spieler, E A; Barth, P S; Burton, J F; Himmelstein, J; Rudolph, L

    2000-01-26

    The American Medical Association's Guides to the Evaluation of Permanent Impairment, Fourth Edition, is the most commonly used tool in the United States for rating permanent impairments for disability systems. The Guides, currently undergoing revision, has been the focus of considerable controversy. Criticisms have focused on 2 areas: internal deficiencies, including the lack of a comprehensive, valid, reliable, unbiased, and evidence-based system for rating impairments; and the way in which workers' compensation systems use the ratings, resulting in inappropriate compensation. We focus on the internal deficiencies and recommend that the Guides remains a tool for evaluation of permanent impairment, not disability. To maintain wide acceptance of the Guides, its authors need to improve the validity, internal consistency, and comprehensiveness of the ratings; document reliability and reproducibility of the results; and make the Guides easily comprehensible and accessible to physicians.

  12. A new technique for measuring listening and reading literacy in developing countries

    NASA Astrophysics Data System (ADS)

    Greene, Barbara A.; Royer, James M.; Anzalone, Stephen

    1990-03-01

    One problem in evaluating educational interventions in developing countries is the absence of tests that adequately reflect the culture and curriculum. The Sentence Verification Technique is a new procedure for measuring reading and listening comprehension that allows for the development of tests based on materials indigenous to a given culture. The validity of using the Sentence Verification Technique to measure reading comprehension in Grenada was evaluated in the present study. The study involved 786 students at standards 3, 4 and 5. The tests for each standard consisted of passages that varied in difficulty. The students identified as high ability in all three standards performed better than those identified as low ability. All students performed better with easier passages. Additionally, students in higher standards performed better than students in lower standards on a given passage. These results supported the claim that the Sentence Verification Technique is a valid measure of reading comprehension in Grenada.

  13. Improving cultural diversity awareness of physical therapy educators.

    PubMed

    Lazaro, Rolando T; Umphred, Darcy A

    2007-01-01

    In a climate of increasing diversity in the population of patients requiring physical therapy (PT) services, PT educators must prepare students and future clinicians to work competently in culturally diverse environments. To be able to achieve this goal, PT educators must be culturally competent as well. The purposes of the study were to develop a valid and reliable instrument to assess cultural diversity awareness and to develop an educational workshop to improve cultural diversity awareness of PT academic and clinical educators. Phase 1 of the study involved the development of an instrument to assess cultural diversity awareness. The Cultural Diversity Awareness Questionnaire (CDAQ) was developed, validated for content, analyzed for reliability, and field and pilot tested. Results indicated that the CDAQ has favorable psychometric properties. Phase 2 of the study involved the development and implementation of the Cultural Diversity Workshop (CDW). The seminar contents and class materials were developed, validated, and implemented as a one-day cultural diversity awareness seminar. A one-group, pretest-posttest experimental design was used, with participants who completed the CDAQ before and after the workshop. Results indicated that the workshop was effective in improving cultural diversity awareness of the participants. Results of the workshop evaluation affirmed the achievement of objectives and effectiveness of the facilitator. This study provided a solid initial foundation upon which a comprehensive cultural competence program can be developed.

  14. External Validation of Bifactor Model of ADHD: Explaining Heterogeneity in Psychiatric Comorbidity, Cognitive Control, and Personality Trait Profiles within DSM-IV ADHD

    ERIC Educational Resources Information Center

    Martel, Michelle M.; Roberts, Bethan; Gremillion, Monica; von Eye, Alexander; Nigg, Joel T.

    2011-01-01

    The current paper provides external validation of the bifactor model of ADHD by examining associations between ADHD latent factor/profile scores and external validation indices. 548 children (321 boys; 302 with ADHD), 6 to 18 years old, recruited from the community participated in a comprehensive diagnostic procedure. Mothers completed the Child…

  15. EFL Learners' Reading Comprehension Development through MALL: Telegram Groups in Focus

    ERIC Educational Resources Information Center

    Naderi, Naderi; Akrami, Azam

    2018-01-01

    The study aimed at investigating the effect of instruction through Telegram groups on the learners' reading comprehension. Moreover, it investigated whether there is any difference between the two genders in the experimental groups in reading comprehension ability. For this purpose, 147 subjects were selected. To homogenize them, a standard…

  16. Students' Reading Comprehension Performance with Emotional Literacy-Based Strategy Intervention

    ERIC Educational Resources Information Center

    Yussof, Yusfarina Mohd; Jamian, Abdul Rasid; Hamzah, Zaitul Azma Zainon; Roslan, Samsilah

    2013-01-01

    An effective reading comprehension process demands a strategy to enhance the cognitive ability to digest text information in the effort to elicit meaning contextually. In addition, the role of emotions also influences the efficacy of this process, especially in narrative text comprehension. This quasi-experimental study aims to observe students'…

  17. The Effects of Self-Questioning on Reading Comprehension: A Literature Review

    ERIC Educational Resources Information Center

    Joseph, Laurice M.; Alber-Morgan, Sheila; Cullen, Jennifer; Rouse, Christina

    2016-01-01

    The ability to monitor one's own reading comprehension is a critical skill for deriving meaning from text. Self-questioning during reading is a strategy that enables students to monitor their reading comprehension and increases their ability to learn independently. The purpose of this article was to review experimental research studies that…

  18. Classroom Simulation to Prepare Teachers to Use Evidence-Based Comprehension Practices

    ERIC Educational Resources Information Center

    Ely, Emily; Alves, Kat D.; Dolenc, Nathan R.; Sebolt, Stephanie; Walton, Emily A.

    2018-01-01

    Reading comprehension is an area of weakness for many students, including those with disabilities. Innovative technology methods may play a role in improving teacher readiness to use evidence-based comprehension practices for all students. In this experimental study, researchers examined a classroom simulation (TLE TeachLivE™) to improve…

  19. Validation of a Comprehensive Early Childhood Allergy Questionnaire.

    PubMed

    Minasyan, Anna; Babajanyan, Arman; Campbell, Dianne E; Nanan, Ralph

    2015-09-01

    Parental questionnaires to assess incidence of pediatric allergic disease have been validated for use in school-aged children. Currently, there is no validated questionnaire-based assessment of food allergy, atopic dermatitis (AD), and asthma for infants and young children. The Comprehensive Early Childhood Allergy Questionnaire was designed for detecting AD, asthma, and IgE-mediated food allergies in children aged 1-5 years. A nested case-control design was applied. Parents of 150 children attending pediatric outpatient clinics completed the questionnaire before being clinically assessed by a pediatrician for allergies. Sensitivity, specificity, and reproducibility of the questionnaire were assessed. Seventy-seven children were diagnosed with one or more current allergic diseases. The questionnaire demonstrated high overall sensitivity of 0.93 (95% CI 0.86-0.98) with a specificity of 0.79 (95% CI 0.68-0.88). Questionnaire reproducibility was good with a kappa agreement rate for symptom-related questions of 0.45-0.90. Comprehensive Early Childhood Allergy Questionnaire accurately and reliably reflects the presence of allergies in children aged 1-5 years. Its use is warranted as a tool for determining prevalence of allergies in this pediatric age group. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
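
    The reported sensitivity, specificity, and kappa follow from standard 2x2 counts. The sketch below shows those computations on hypothetical counts; it is not the study's data, and the kappa here is the generic Cohen's formulation.

        # Sensitivity, specificity and Cohen's kappa from 2x2 counts.
        # Hypothetical counts for illustration; not the study's raw data.
        def screening_metrics(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            n = tp + fp + fn + tn
            p_observed = (tp + tn) / n
            p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
            kappa = (p_observed - p_expected) / (1 - p_expected)
            return sensitivity, specificity, kappa

        # e.g. 72 true positives, 15 false positives, 5 false negatives, 58 true negatives
        print(screening_metrics(tp=72, fp=15, fn=5, tn=58))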

  20. The use of functional chemical-protein associations to identify multi-pathway renoprotectants.

    PubMed

    Xu, Jia; Meng, Kexin; Zhang, Rui; Yang, He; Liao, Chang; Zhu, Wenliang; Jiao, Jundong

    2014-01-01

    Typically, most nephropathies can be categorized as complex human diseases in which the cumulative effect of multiple minor genes, combined with environmental and lifestyle factors, determines the disease phenotype. Thus, multi-target drugs would be more likely to facilitate comprehensive renoprotection than single-target agents. In this study, functional chemical-protein association analysis was performed to retrieve multi-target drugs of high pathway wideness from the STITCH 3.1 database. The pathway wideness of a drug measures how many Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways it regulates. We identified nine experimentally validated renoprotectants that exerted a remarkable impact on KEGG pathways by targeting a limited number of proteins. We selected curcumin as an illustrative compound to display the advantage of multi-pathway drugs for renoprotection. We compared curcumin with hemin, an agonist of heme oxygenase-1 (HO-1), which significantly affects only one KEGG pathway, porphyrin and chlorophyll metabolism (adjusted p = 1.5×10⁻⁵). At the same concentration (10 µM), both curcumin and hemin equivalently mitigated oxidative stress in H2O2-treated glomerular mesangial cells. The benefit of using hemin was derived from its agonistic effect on HO-1, providing relief from oxidative stress. Selective inhibition of HO-1 completely blocked the action of hemin but not that of curcumin, suggesting simultaneous multi-pathway intervention by curcumin. Curcumin also increased cellular autophagy levels, enhancing its protective effect; however, hemin had no such effect. Based on the fact that the dysregulation of multiple pathways is implicated in the etiology of complex diseases, we proposed a feasible method for identifying multi-pathway drugs from compounds with validated targets. Our efforts will help identify multi-pathway agents capable of providing comprehensive protection against renal injuries.
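
    The adjusted p-value quoted for the porphyrin and chlorophyll metabolism pathway comes from a pathway over-representation test. A minimal, generic sketch of such a test (a one-sided hypergeometric test, commonly used for KEGG enrichment) is given below with made-up counts; it is not the authors' STITCH-based pipeline.

        # Generic KEGG-style over-representation test: one-sided hypergeometric
        # probability that a drug's targets hit a pathway this often by chance.
        # Made-up counts; not the authors' STITCH-based pipeline.
        from scipy.stats import hypergeom

        def pathway_enrichment_p(total_genes, pathway_genes, drug_targets, overlap):
            # P(X >= overlap) when drawing drug_targets genes from total_genes,
            # of which pathway_genes belong to the pathway.
            return hypergeom.sf(overlap - 1, total_genes, pathway_genes, drug_targets)

        # e.g. 20,000 genes in the background, 42 in the pathway,
        # a drug with 15 protein targets, 4 of which fall in the pathway.
        print(pathway_enrichment_p(20000, 42, 15, 4))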

  1. Comprehension of Japanese oral care-related terms among caregivers and nurses, as assessed using a newly developed instrument.

    PubMed

    Shibata, Satoko; Stegaroiu, Roxana; Nakazawa, Akari; Ohuchi, Akitsugu

    2017-03-01

    (i) To assess comprehension of oral care-related terms among caregivers and nurses working at long-term care facilities, using a newly developed test; (ii) to analyse the effect of participant characteristics on their comprehension. Effective mutual communication between dental professionals and caregivers/nurses is essential for providing information on daily oral care for institutionalised elders. A 36-item word-knowledge test in Japanese was developed to assess comprehension of oral care-related terms. The test was administered to a convenience sample of 236 nursing staff (198 caregivers and 38 nurses) at six long-term care facilities in Niigata City, Japan, and its reliability and validity were verified. Associations of participant characteristics with their responses were investigated by multiple regression analysis. Mean percentage of correct responses (accuracy rate) for nursing staff was approximately 62% (highest for oral care products and lowest for prosthodontic terms). Test internal reliability was high (Cronbach's alpha >0.8). Concurrent validity (test ability to distinguish between characteristically different groups) was confirmed. Mean accuracy rate was significantly higher among nurses (78.5 ± 19.3%) than among caregivers (58.7 ± 22.8%), and among respondents with interest in oral care (64.2 ± 21.1%) than among those with no such interest (51.5 ± 28.9%). The word-knowledge test was valid and reliable for nursing staff of six long-term care facilities in Niigata City. Their comprehension was low for perioral and intraoral structures, related symptom and disease names, and prosthodontics terms related to oral care. Understanding of oral care-related terms among the nursing staff was related to their occupation and interest in oral care. © 2016 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.

  2. The Human Urine Metabolome

    PubMed Central

    Bouatra, Souhaila; Aziat, Farid; Mandal, Rupasri; Guo, An Chi; Wilson, Michael R.; Knox, Craig; Bjorndahl, Trent C.; Krishnamurthy, Ramanarayan; Saleem, Fozia; Liu, Philip; Dame, Zerihun T.; Poelzer, Jenna; Huynh, Jessica; Yallou, Faizath S.; Psychogios, Nick; Dong, Edison; Bogumil, Ralf; Roehring, Cornelia; Wishart, David S.

    2013-01-01

    Urine has long been a “favored” biofluid among metabolomics researchers. It is sterile, easy to obtain in large volumes, largely free from interfering proteins or lipids and chemically complex. However, this chemical complexity has also made urine a particularly difficult substrate to fully understand. As a biological waste material, urine typically contains metabolic breakdown products from a wide range of foods, drinks, drugs, environmental contaminants, endogenous waste metabolites and bacterial by-products. Many of these compounds are poorly characterized and poorly understood. In an effort to improve our understanding of this biofluid we have undertaken a comprehensive, quantitative, metabolome-wide characterization of human urine. This involved both computer-aided literature mining and comprehensive, quantitative experimental assessment/validation. The experimental portion employed NMR spectroscopy, gas chromatography mass spectrometry (GC-MS), direct flow injection mass spectrometry (DFI/LC-MS/MS), inductively coupled plasma mass spectrometry (ICP-MS) and high performance liquid chromatography (HPLC) experiments performed on multiple human urine samples. This multi-platform metabolomic analysis allowed us to identify 445 and quantify 378 unique urine metabolites or metabolite species. The different analytical platforms were able to identify (quantify) a total of: 209 (209) by NMR, 179 (85) by GC-MS, 127 (127) by DFI/LC-MS/MS, 40 (40) by ICP-MS and 10 (10) by HPLC. Our use of multiple metabolomics platforms and technologies allowed us to identify several previously unknown urine metabolites and to substantially enhance the level of metabolome coverage. It also allowed us to critically assess the relative strengths and weaknesses of different platforms or technologies. The literature review led to the identification and annotation of another 2206 urinary compounds and was used to help guide the subsequent experimental studies. An online database containing the complete set of 2651 confirmed human urine metabolite species, their structures (3079 in total), concentrations, related literature references and links to their known disease associations is freely available at http://www.urinemetabolome.ca. PMID:24023812

  3. Experimental Characterization and Validation of Simultaneous Gust Alleviation and Energy Harvesting for Multifunctional Wing Spars

    DTIC Science & Technology

    2012-08-01

    Indexed excerpt (figure captions and summary fragments): gust simulation using a Dryden PSD with U0 = 15 m/s and Lv = 350 m (cloud wind and clear sky); harvested energy from normal vibration; an energy control law based on limited energy constraints; and experimentally validated simultaneous energy harvesting and vibration control (AFOSR).

  4. Validation of Malayalam Version of National Comprehensive Cancer Network Distress Thermometer and its Feasibility in Oncology Patients.

    PubMed

    Biji, M S; Dessai, Sampada; Sindhu, N; Aravind, Sithara; Satheesan, B

    2018-01-01

    This study was designed to translate and validate the National Comprehensive Cancer Network (NCCN) distress thermometer (DT) in the regional language Malayalam and to assess the feasibility of using it in our patients. The objectives were (1) to translate and validate the NCCN DT and (2) to study the feasibility of using the validated Malayalam-translated DT at Malabar Cancer Center. This single-arm prospective observational study was conducted in the Department of Cancer Palliative Medicine at the author's institution between December 8, 2015, and January 20, 2016, and was carried out in two phases. In Phase 1, linguistic validation of the NCCN DT was performed. In Phase 2, the feasibility, face validity, and utility of the translated NCCN DT were assessed using the QQ-10 tool. SPSS version 16 (SPSS Inc., Chicago; SPSS for Windows, Version 16.0, released 2007) was used for analysis. Ten patients were enrolled in Phase 2. The median age was 51.5 years and 40% of patients were male. All patients had completed at least basic education up to the primary level. The primary site of cancer was heterogeneous. The NCCN DT completion rate was 100%. Face validity, utility, reliability, and feasibility were 100%, 100%, 100%, and 90%, respectively. It can be concluded that the validated Malayalam DT has high face validity and utility, and that its use is feasible.

  5. Comprehension and Motivation Levels in Conjunction with the Use of eBooks with Audio: A Quasi-Experimental Study of Post-Secondary Remedial Reading Students

    ERIC Educational Resources Information Center

    Wheeler, Kimberly W.

    2014-01-01

    This quasi-experimental pretest, posttest nonequivalent control group study investigated the comprehension scores and motivation levels of post-secondary remedial reading students in a two-year technical college in Northwest Georgia using an eBook, an eBook with audio, and a print book. After reading a module on Purpose and Tone in the three book…

  6. Review and assessment of turbulence models for hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roy, Christopher J.; Blottner, Frederick G.

    2006-10-01

    Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide-range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database. 
Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.

  7. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Indexed excerpt (report documentation page): Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions, by Matthew D. Bouwense; distribution unlimited.

  8. Implementation of a state-to-state analytical framework for the calculation of expansion tube flow properties

    NASA Astrophysics Data System (ADS)

    James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.

    2018-03-01

    Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
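
    The "established compressible and isentropic flow relations" referred to above include the standard stagnation-to-static ratios for a calorically perfect gas. The sketch below illustrates that building block only; it is not the published facility code, and the Mach number used is an arbitrary example.

        # Standard isentropic stagnation-to-static ratios for a calorically perfect
        # gas, as a function of Mach number. A building block of the kind of
        # state-to-state analysis described above; not the published facility code.
        def isentropic_ratios(mach, gamma=1.4):
            t0_over_t = 1.0 + 0.5 * (gamma - 1.0) * mach**2
            p0_over_p = t0_over_t ** (gamma / (gamma - 1.0))
            rho0_over_rho = t0_over_t ** (1.0 / (gamma - 1.0))
            return t0_over_t, p0_over_p, rho0_over_rho

        # e.g. a Mach 7 test flow in an air-like gas (gamma = 1.4)
        T_ratio, p_ratio, rho_ratio = isentropic_ratios(7.0)
        print(f"T0/T = {T_ratio:.1f}, p0/p = {p_ratio:.0f}, rho0/rho = {rho_ratio:.1f}")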

  9. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods that have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database suggests that document analysis methods can be trained and tested on synthetic samples whenever sufficient natural ground-truthed data is not available. PMID:26295059
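
    The B-spline smoothing step mentioned above can be illustrated with SciPy's parametric spline routines. The sketch below smooths a noisy toy 2D stroke trajectory and is a generic stand-in, not the system's actual rendering code.

        # Parametric B-spline smoothing of a noisy 2D stroke trajectory, as a
        # generic stand-in for the smoothing step described above.
        import numpy as np
        from scipy.interpolate import splprep, splev

        rng = np.random.default_rng(1)
        t = np.linspace(0, 2 * np.pi, 40)
        x = t + 0.05 * rng.standard_normal(t.size)          # toy "handwritten" stroke
        y = np.sin(t) + 0.05 * rng.standard_normal(t.size)

        tck, _ = splprep([x, y], s=0.05)                    # fit smoothing B-spline
        u_fine = np.linspace(0, 1, 400)
        x_smooth, y_smooth = splev(u_fine, tck)             # evaluate smoothed curve
        print(x_smooth[:3], y_smooth[:3])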

  10. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods that have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database suggests that document analysis methods can be trained and tested on synthetic samples whenever sufficient natural ground-truthed data is not available.

  11. Worldwide Protein Data Bank biocuration supporting open access to high-quality 3D structural biology data

    PubMed Central

    Westbrook, John D; Feng, Zukang; Persikova, Irina; Sala, Raul; Sen, Sanchayita; Berrisford, John M; Swaminathan, G Jawahar; Oldfield, Thomas J; Gutmanas, Aleksandras; Igarashi, Reiko; Armstrong, David R; Baskaran, Kumaran; Chen, Li; Chen, Minyu; Clark, Alice R; Di Costanzo, Luigi; Dimitropoulos, Dimitris; Gao, Guanghua; Ghosh, Sutapa; Gore, Swanand; Guranovic, Vladimir; Hendrickx, Pieter M S; Hudson, Brian P; Ikegawa, Yasuyo; Kengaku, Yumiko; Lawson, Catherine L; Liang, Yuhe; Mak, Lora; Mukhopadhyay, Abhik; Narayanan, Buvaneswari; Nishiyama, Kayoko; Patwardhan, Ardan; Sahni, Gaurav; Sanz-García, Eduardo; Sato, Junko; Sekharan, Monica R; Shao, Chenghua; Smart, Oliver S; Tan, Lihua; van Ginkel, Glen; Yang, Huanwang; Zhuravleva, Marina A; Markley, John L; Nakamura, Haruki; Kurisu, Genji; Kleywegt, Gerard J; Velankar, Sameer; Berman, Helen M; Burley, Stephen K

    2018-01-01

    The Protein Data Bank (PDB) is the single global repository for experimentally determined 3D structures of biological macromolecules and their complexes with ligands. The worldwide PDB (wwPDB) is the international collaboration that manages the PDB archive according to the FAIR principles: Findability, Accessibility, Interoperability and Reusability. The wwPDB recently developed OneDep, a unified tool for deposition, validation and biocuration of structures of biological macromolecules. All data deposited to the PDB undergo critical review by wwPDB Biocurators. This article outlines the importance of biocuration for structural biology data deposited to the PDB and describes wwPDB biocuration processes and the role of expert Biocurators in sustaining a high-quality archive. Structural data submitted to the PDB are examined for self-consistency, standardized using controlled vocabularies, cross-referenced with other biological data resources and validated for scientific/technical accuracy. We illustrate how biocuration is integral to PDB data archiving, as it facilitates accurate, consistent and comprehensive representation of biological structure data, allowing efficient and effective usage by research scientists, educators, students and the curious public worldwide. Database URL: https://www.wwpdb.org/ PMID:29688351

  12. Boeing Smart Rotor Full-scale Wind Tunnel Test Data Report

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Hagerty, Brandon; Salazar, Denise

    2016-01-01

    A full-scale helicopter smart material actuated rotor technology (SMART) rotor test was conducted in the USAF National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames. The SMART rotor system is a five-bladed MD 902 bearingless rotor with active trailing-edge flaps. The flaps are actuated using piezoelectric actuators. Rotor performance, structural loads, and acoustic data were obtained over a wide range of rotor shaft angles of attack, thrust, and airspeeds. The primary test objective was to acquire unique validation data for the high-performance computing analyses developed under the Defense Advanced Research Project Agency (DARPA) Helicopter Quieting Program (HQP). Other research objectives included quantifying the ability of the on-blade flaps to achieve vibration reduction, rotor smoothing, and performance improvements. This data set of rotor performance and structural loads can be used for analytical and experimental comparison studies with other full-scale rotor systems and for analytical validation of computer simulation models. The purpose of this final data report is to document a comprehensive, high-quality data set that includes only data points where the flap was actively controlled and each of the five flaps behaved in a similar manner.

  13. Development and Validation of the Career Competencies Indicator (CCI)

    ERIC Educational Resources Information Center

    Francis-Smythe, Jan; Haase, Sandra; Thomas, Erica; Steele, Catherine

    2013-01-01

    This article describes the development and validation of the Career Competencies Indicator (CCI); a 43-item measure to assess career competencies (CCs). Following an extensive literature review, a comprehensive item generation process involving consultation with subject matter experts, a pilot study and a factor analytic study on a large sample…

  14. Development and Validation of an Instructional Willingness to Communicate Questionnaire

    ERIC Educational Resources Information Center

    Khatib, Mohammad; Nourzadeh, Saeed

    2015-01-01

    The current study was undertaken with the purpose of developing and validating a willingness to communicate (WTC) questionnaire for instructional language teaching and learning contexts. Six instructional WTC (IWTC) components were identified after (1) undertaking a comprehensive review of the literature on second language (L2) WTC and other…

  15. VQSEC Home Page

    Science.gov Websites

    Site excerpt: The Validation and Qualification Sciences Experimental Complex (VQSEC) at Sandia…

  16. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2

    NASA Technical Reports Server (NTRS)

    Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)

    1998-01-01

    The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.

  17. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  18. Teaching Genetics with Multimedia Results in Better Acquisition of Knowledge and Improvement in Comprehension

    ERIC Educational Resources Information Center

    Starbek, P.; Erjavec, M. Starcic; Peklaj, C.

    2010-01-01

    The main goal of this study was to explore whether the use of multimedia in genetics instruction contributes more to students' knowledge and comprehension than other instructional modes. We were also concerned with the influence of different instructional modes on the retention of knowledge and comprehension. In a quasi-experimental design, four…

  19. Acquisition of Prosodic Focus Marking by English, French, and German Three-, Four-, Five- and Six-Year-Olds

    ERIC Educational Resources Information Center

    Szendroi, Kriszta; Bernard, Carline; Berger, Frauke; Gervain, Judit; Hohle, Barbara

    2018-01-01

    Previous research on young children's knowledge of prosodic focus marking has revealed an apparent paradox, with comprehension appearing to lag behind production. Comprehension of prosodic focus is difficult to study experimentally due to its subtle and ambiguous contribution to pragmatic meaning. We designed a novel comprehension task, which…

  20. The Effects of an Intensive Reading Intervention for Ninth Graders with Very Low Reading Comprehension

    ERIC Educational Resources Information Center

    Solís, Michael; Vaughn, Sharon; Scammacca, Nancy

    2015-01-01

    This experimental study examined the efficacy of a multicomponent reading intervention compared to a control condition on the reading comprehension of adolescent students with low reading comprehension (more than 1½ standard deviations below normative sample). Ninth-grade students were randomly assigned to treatment (n = 25) and comparison (n =…

  1. Can Cloze Tests Really Improve Second Language Learners' Reading Comprehension Skills?

    ERIC Educational Resources Information Center

    Ren, Guanxin

    2011-01-01

    Cloze testing is a widely-used procedure to test learners' reading comprehension in learning a language, but little is known if it can really improve learners' reading comprehension skills. This paper attempts to seek answers to this question by comparing the cloze test scores of two groups of students (Experimental versus Control) undertaking…

  2. The Effect of Text Typographical Features on Legibility, Comprehension, and Retrieval of EFL Learners

    ERIC Educational Resources Information Center

    Soleimani, Hassan; Mohammadi, Elham

    2012-01-01

    This experimental study investigated the relationship between font type, font size, and line spacing and legibility, as measured by speed of reading, comprehension, and recalling. Instruments for testing legibility and reading comprehension were presented in eight typographical styles in print. The study tested 90 students for legibility and 76…

  3. Bringing Together Reading and Writing: An Experimental Study of Writing Intensive Reading Comprehension in Low-Performing Urban Elementary Schools

    ERIC Educational Resources Information Center

    Collins, James L.; Lee, Jaekyung; Fox, Jeffery D.; Madigan, Timothy P.

    2017-01-01

    This study examined the hypothesis that assisted writing during reading improves reading comprehension. The hypothesis was derived from sociocognitive and constructivist theory and research and implemented in the form of a curricular intervention called Writing Intensive Reading Comprehension after its main feature of bringing together reading…

  4. Enhancing Literacy Skills of Students with Congenital and Profound Hearing Impairment in Nigeria Using Babudoh's Comprehension Therapy

    ERIC Educational Resources Information Center

    Babudoh, Gladys B.

    2014-01-01

    This study reports the effect of a treatment tool called "Babudoh's comprehension therapy" in enhancing the comprehension and writing skills of 10 junior secondary school students with congenital and profound hearing impairment in Plateau State, Nigeria. The study adopted the single group pretest-posttest quasi-experimental research…

  5. Young and restless: validation of the Mind-Wandering Questionnaire (MWQ) reveals disruptive impact of mind-wandering for youth

    PubMed Central

    Mrazek, Michael D.; Phillips, Dawa T.; Franklin, Michael S.; Broadway, James M.; Schooler, Jonathan W.

    2013-01-01

    Mind-wandering is the focus of extensive investigation, yet until recently there has been no validated scale to directly measure trait levels of task-unrelated thought. Scales commonly used to assess mind-wandering lack face validity, measuring related constructs such as daydreaming or behavioral errors. Here we report four studies validating a Mind-Wandering Questionnaire (MWQ) across college, high school, and middle school samples. The 5-item scale showed high internal consistency, as well as convergent validity with existing measures of mind-wandering and related constructs. Trait levels of mind-wandering, as measured by the MWQ, were correlated with task-unrelated thought measured by thought sampling during a test of reading comprehension. In both middle school and high school samples, mind-wandering during testing was associated with worse reading comprehension. By contrast, elevated trait levels of mind-wandering predicted worse mood, less life-satisfaction, greater stress, and lower self-esteem. By extending the use of thought sampling to measure mind-wandering among adolescents, our findings also validate the use of this methodology with younger populations. Both the MWQ and thought sampling indicate that mind-wandering is a pervasive—and problematic—influence on the performance and well-being of adolescents. PMID:23986739

  6. On the Factor Structure of a Reading Comprehension Test

    ERIC Educational Resources Information Center

    Salehi, Mohammad

    2011-01-01

    To investigate the construct validity of a section of a high-stakes test, an exploratory factor analysis using principal components analysis was employed. The rotation used was varimax with a suppression level of 0.30. Eleven factors were extracted out of 35 reading comprehension items. The fact that these factors emerged speaks to the construct…

  7. Preliminary Evidence for the Validity of the New Test of Everyday Reading Comprehension

    ERIC Educational Resources Information Center

    Wheldall, Kevin; McMurtry, Sarah

    2014-01-01

    The Test of Everyday Reading Comprehension (TERC) has recently been presented as an addition to the armoury of tests available for assessing the skills of low-progress readers. While comparison data for students of different ages are presented together with evidence for high test reliability, there is, as yet, no published evidence for its…

  8. Validation of NOCTI Instruments Using Vocational Education Completers of Florida Comprehensive High Schools. Final Report (April 9, 1979-June 30, 1980).

    ERIC Educational Resources Information Center

    Hill, Raymond; Klein, Raymond S.

    A study examined the feasibility of adopting National Occupational Competency Testing Institute (NOCTI) examinations for use by completers of vocational programs in Florida comprehensive high schools. A total of 34 candidates in five occupational areas (architectural drafting, carpentry, plumbing, small engine repair, and welding) at four…

  9. Can the Simple View Deal with the Complexities of Reading?

    ERIC Educational Resources Information Center

    Kirby, John R.; Savage, Robert S.

    2008-01-01

    We review the Simple View of Reading (SVR) model and examine its nature, applicability and validity. We describe the SVR as an abstract framework for understanding the relationship between global linguistic comprehension and word-reading abilities in reading comprehension (RC). We argue that the SVR is neither a full theory of reading nor a…

  10. Development of a Criterion-Referenced, Performance-Based Assessment of Reading Comprehension in a Whole Literacy Program.

    ERIC Educational Resources Information Center

    Tibbetts, Katherine A.; And Others

    This paper describes the development of a criterion-referenced, performance-based measure of third grade reading comprehension. The primary purpose of the assessment is to contribute unique and valid information for use in the formative evaluation of a whole literacy program. A secondary purpose is to supplement other program efforts to…

  11. Examining Samoan Language Development in Samoan Bilingual Students' Understanding of Texts in English

    ERIC Educational Resources Information Center

    Amituanai-Toloa, Meaola; McNaughton, Stuart; Kuin Lai, Mei

    2009-01-01

    This paper examines language development of Samoan students in bilingual contexts in Aotearoa, New Zealand. In the absence of valid and standardized assessments tools in Samoan, one was designed to test reading comprehension and oral language development for Samoan students using common narratives as a base. For reading comprehension, the tool…

  12. On use of ZPR research reactors and associated instrumentation and measurement methods for reactor physics studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauvin, J.P.; Blaise, P.; Lyoussi, A.

    2015-07-01

    The French Alternative Energies and Atomic Energy Commission (CEA) is strongly involved in research and development programs concerning the use of nuclear energy as a clean and reliable source of energy and is consequently working on present and future generations of reactors, on topics such as ageing plant management, optimization of the plutonium stockpile, waste management and the exploration of innovative systems. Core physics studies are an essential part of this comprehensive R and D effort. In particular, the Zero Power Reactors (ZPRs) of the CEA (EOLE, MINERVE and MASURCA) play an important role in the validation of neutron (as well as photon) physics calculation tools (codes and nuclear data). The experimental programs defined in the CEA's ZPR facilities aim at improving the calculation routes by reducing the uncertainties of the experimental databases. They also provide accurate data on innovative systems in terms of new materials (moderating and decoupling materials) and new concepts (ADS, ABWR, new MTRs such as the JHR, and Gen IV) involving new fuels, absorbers and coolant materials. Conducting such experimental R and D programs is based on determining and measuring the main parameters of the phenomena of interest in order to qualify calculation tools and nuclear data libraries. Determining these parameters relies on numerous and varied experimental techniques using specific and appropriate instrumentation and detection tools. The main ZPR experimental programs at the CEA, their objectives and their challenges will be presented and discussed. Future developments and perspectives regarding the ZPR reactors and associated programs will also be presented. (authors)

  13. Motional timescale predictions by molecular dynamics simulations: case study using proline and hydroxyproline sidechain dynamics.

    PubMed

    Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M

    2014-02-01

    We propose a new approach for force field optimizations which aims at reproducing dynamics characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for dynamics fittings, we use (13) C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow to determine correlation times of both overall molecular and intramolecular motions. For structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline with relatively simple cyclic structure and sidechain dynamics were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified large number of parameter sets which fit equally well experimental J couplings. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces experimental timescale of the sidechain dynamics. Verification of the new force-field (termed as AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements compared to the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is the valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Copyright © 2013 Wiley Periodicals, Inc.
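
    A minimal sketch of the Arrhenius-type fitting step described above, under the assumption that the sidechain correlation time grows roughly exponentially with the trial force constant; all numbers below are hypothetical and only illustrate how the force constant matching an experimental correlation time could be interpolated (this is not the authors' code).

        import numpy as np

        # hypothetical trial force constants and the correlation times obtained from MD with each
        k_trial = np.array([30.0, 40.0, 50.0, 60.0])   # arbitrary units
        tau_md = np.array([18.0, 31.0, 55.0, 96.0])    # ps
        tau_exp = 42.0                                 # hypothetical experimental correlation time (ps)

        # fit ln(tau) ~ a*k + b and invert to estimate the force constant matching experiment
        a, b = np.polyfit(k_trial, np.log(tau_md), 1)
        k_best = (np.log(tau_exp) - b) / a
        print(f"estimated force constant: {k_best:.1f}")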

  14. Modeling and characterization of multipath in global navigation satellite system ranging signals

    NASA Astrophysics Data System (ADS)

    Weiss, Jan Peter

    The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct path, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
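
    As a rough illustration of the final step mentioned above, a least-squares reflection-coefficient estimate reduces to a single scale factor once a ray-tracing model supplies the multipath error expected for a unit-magnitude reflection. The sketch below assumes that linear model and uses made-up numbers; it is not the dissertation's algorithm.

        import numpy as np

        predicted_unit = np.array([0.10, -0.25, 0.40, 0.05, -0.30])  # model error for unit reflection (m), hypothetical
        observed = np.array([0.06, -0.17, 0.27, 0.02, -0.21])        # measured multipath error (m), hypothetical

        # solve observed ~ rho * predicted_unit in the least-squares sense
        rho, *_ = np.linalg.lstsq(predicted_unit.reshape(-1, 1), observed, rcond=None)
        print(f"estimated reflection coefficient: {rho[0]:.2f}")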

  15. Digitised audio questionnaire for assessment of informed consent comprehension in a low-literacy African research population: development and psychometric evaluation

    PubMed Central

    Afolabi, Muhammed O; Bojang, Kalifa; D'Alessandro, Umberto; Ota, Martin O C; Imoukhuede, Egeruan B; Ravinetto, Raffaella; Larson, Heidi J; McGrath, Nuala; Chandramohan, Daniel

    2014-01-01

    Objective To develop and psychometrically evaluate an audio digitised tool for assessment of comprehension of informed consent among low-literacy Gambian research participants. Setting We conducted this study in the Gambia where a high illiteracy rate and absence of standardised writing formats of local languages pose major challenges for research participants to comprehend consent information. We developed a 34-item questionnaire to assess participants’ comprehension of key elements of informed consent. The questionnaire was face validated and content validated by experienced researchers. To bypass the challenge of a lack of standardised writing formats, we audiorecorded the questionnaire in three major Gambian languages: Mandinka, Wolof and Fula. The questionnaire was further developed into an audio computer-assisted interview format. Participants The digitised questionnaire was administered to 250 participants enrolled in two clinical trials in the urban and rural areas of the Gambia. One week after first administration, the questionnaire was readministered to half of the participants who were randomly selected. Participants were eligible if enrolled in the parent trials and could speak any of the three major Gambian languages. Outcome measure The primary outcome measure was reliability and validity of the questionnaire. Results Item reduction by factor analysis showed that 21 of the question items have strong factor loadings. These were retained along with five other items which were fundamental components of informed consent. The 26-item questionnaire has high internal consistency with a Cronbach's α of 0.73–0.79 and an intraclass correlation coefficient of 0.94 (95% CI 0.923 to 0.954). Hypotheses testing also showed that the questionnaire has a positive correlation with a similar questionnaire and discriminates between participants with and without education. Conclusions We have developed a reliable and valid measure of comprehension of informed consent information for the Gambian context, which might be easily adapted to similar settings. This is a major step towards engendering comprehension of informed consent information among low-literacy participants. PMID:24961716
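
    The internal-consistency figure reported above (Cronbach's alpha) can be reproduced from a respondents-by-items score matrix in a few lines; the sketch below uses hypothetical 0/1 item scores and is not the study's analysis script.

        import numpy as np

        scores = np.array([          # rows = participants, columns = questionnaire items (1 = correct)
            [1, 1, 0, 1, 1],
            [0, 1, 0, 0, 1],
            [1, 1, 1, 1, 1],
            [0, 0, 0, 1, 0],
            [1, 0, 1, 1, 1],
        ])

        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()          # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)           # variance of total scores
        alpha = k / (k - 1) * (1 - item_var / total_var)
        print(f"Cronbach's alpha: {alpha:.2f}")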

  16. Digitised audio questionnaire for assessment of informed consent comprehension in a low-literacy African research population: development and psychometric evaluation.

    PubMed

    Afolabi, Muhammed O; Bojang, Kalifa; D'Alessandro, Umberto; Ota, Martin O C; Imoukhuede, Egeruan B; Ravinetto, Raffaella; Larson, Heidi J; McGrath, Nuala; Chandramohan, Daniel

    2014-06-24

    To develop and psychometrically evaluate an audio digitised tool for assessment of comprehension of informed consent among low-literacy Gambian research participants. We conducted this study in the Gambia where a high illiteracy rate and absence of standardised writing formats of local languages pose major challenges for research participants to comprehend consent information. We developed a 34-item questionnaire to assess participants' comprehension of key elements of informed consent. The questionnaire was face validated and content validated by experienced researchers. To bypass the challenge of a lack of standardised writing formats, we audiorecorded the questionnaire in three major Gambian languages: Mandinka, Wolof and Fula. The questionnaire was further developed into an audio computer-assisted interview format. The digitised questionnaire was administered to 250 participants enrolled in two clinical trials in the urban and rural areas of the Gambia. One week after first administration, the questionnaire was readministered to half of the participants who were randomly selected. Participants were eligible if enrolled in the parent trials and could speak any of the three major Gambian languages. The primary outcome measure was reliability and validity of the questionnaire. Item reduction by factor analysis showed that 21 of the question items have strong factor loadings. These were retained along with five other items which were fundamental components of informed consent. The 26-item questionnaire has high internal consistency with a Cronbach's α of 0.73-0.79 and an intraclass correlation coefficient of 0.94 (95% CI 0.923 to 0.954). Hypotheses testing also showed that the questionnaire has a positive correlation with a similar questionnaire and discriminates between participants with and without education. We have developed a reliable and valid measure of comprehension of informed consent information for the Gambian context, which might be easily adapted to similar settings. This is a major step towards engendering comprehension of informed consent information among low-literacy participants. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. [Remote sensing analysis of ecological change caused by construction of the new island city: Pingtan Comprehensive Experimental Zone, Fujian Province].

    PubMed

    Wen, Xiao-le; Lin, Zheng-feng; Tang, Fei

    2015-02-01

    Pingtan Island was officially established as the 'Pingtan Comprehensive Experimental Zone of Fujian' in 2010, and this designation led to a surge of construction on the island. Based on Landsat-5 images from 2007 and the latest Landsat-8 images from 2013, this paper studied the ecological status of the Pingtan Comprehensive Experimental Zone at its early stage of construction, the temporal trends of the ecological changes, and the reasons for those changes, using the remote sensing ecological index (RSEI). The results showed that, as an ecologically fragile area, Pingtan Island had a moderate level of overall ecological status. In the early construction period (from 2007 to 2013), the ecological status of the island showed a downward trend, with a 14% drop of the RSEI from 0.511 in 2007 to 0.450 in 2013, and approximately 36.5% of the area of the island experienced degradation of its ecological status, mainly in the central and southwestern parts of the island. The degradation was mainly due to large-scale construction, which further damaged the scarce vegetation on the island. Therefore, in order to curb the downward trend in the ecological quality of the Pingtan Comprehensive Experimental Zone, effective ecological protection measures must be developed and implemented during construction.
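
    RSEI-type indices are commonly constructed as the first principal component of normalised greenness, wetness, dryness and heat indicators derived from the Landsat bands. The sketch below shows that general construction on random stand-in pixel values; it is an assumption about the workflow, not the authors' processing chain.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        indicators = rng.random((1000, 4))   # stand-in pixels: NDVI, wetness, dryness, LST (already normalised)

        pc1 = PCA(n_components=1).fit_transform(indicators).ravel()
        rsei = (pc1 - pc1.min()) / (pc1.max() - pc1.min())   # rescale so higher values mean better status
        print(f"mean RSEI: {rsei.mean():.3f}")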

  18. Manipulating glucocorticoids in wild animals: basic and applied perspectives

    PubMed Central

    Sopinka, Natalie M.; Patterson, Lucy D.; Redfern, Julia C.; Pleizier, Naomi K.; Belanger, Cassia B.; Midwood, Jon D.; Crossin, Glenn T.; Cooke, Steven J.

    2015-01-01

    One of the most comprehensively studied responses to stressors in vertebrates is the endogenous production and regulation of glucocorticoids (GCs). Extensive laboratory research using experimental elevation of GCs in model species is instrumental in learning about stressor-induced physiological and behavioural mechanisms; however, such studies fail to inform our understanding of ecological and evolutionary processes in the wild. We reviewed emerging research that has used GC manipulations in wild vertebrates to assess GC-mediated effects on survival, physiology, behaviour, reproduction and offspring quality. Within and across taxa, exogenous manipulation of GCs increased, decreased or had no effect on traits examined in the reviewed studies. The notable diversity in responses to GC manipulation could be associated with variation in experimental methods, inherent differences among species, morphs, sexes and age classes, and the ecological conditions in which responses were measured. In their current form, results from experimental studies may be applied to animal conservation on a case-by-case basis in contexts such as threshold-based management. We discuss ways to integrate mechanistic explanations for changes in animal abundance in altered environments with functional applications that inform conservation practitioners of which species and traits may be most responsive to environmental change or human disturbance. Experimental GC manipulation holds promise for determining mechanisms underlying fitness impairment and population declines. Future work in this area should examine multiple life-history traits, with consideration of individual variation and, most importantly, validation of GC manipulations within naturally occurring and physiologically relevant ranges. PMID:27293716

  19. Numerical-experimental study of internal fixation system "Dufoo" for vertebral fractures.

    PubMed

    Nieto-Miranda, J Jesús; Faraón-Carbajal Romero, Manuel; Sánchez-Aguilar, Jons

    2012-01-01

    We describe a numerical-experimental study of the stress generated by the internal fixation system "Dufoo" used in the treatment of vertebral fractures, with the purpose of validating the numerical model of human lumbar vertebrae under the main physiological loads to which the human body is exposed in this region. The objective is to model and numerically simulate the elements of the musculoskeletal system in order to obtain the stresses generated and other parameters that are difficult to measure experimentally in the thoracolumbar vertebrae. We used a "Dufoo" internal fixator and L2-L3-L4 vertebral specimens from pig and human. The system uses a total L3 corpectomy, with the fixator acting as a mechanical bridge implant from L2 to L4. Numerical analysis was performed using the finite element method (FEM); for the experimental study, reflective photoelasticity and extensometry were used. Torsion and combined loads generate the main displacements and stresses in the study system, showing that the internal fixation takes over part of the function of the damaged structure by absorbing the stresses produced by the applied loads. Numerical analysis allows great freedom in the management of the variables involved in the models developed from radiological images. The geometric models obtained are entered into FEM programs that allow testing with parameters that could not easily be applied under actual conditions, making it possible to comprehensively determine the biomechanical behavior of the coupled system under study.

  20. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...

  1. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...

  2. Methodological convergence of program evaluation designs.

    PubMed

    Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa

    2014-01-01

    The dichotomous opposition between experimental/quasi-experimental and non-experimental/ethnographic studies persists; yet, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed on the basis of experimental and quasi-experimental studies. This hinders the practice of evaluators and planners in empirical program evaluation, a sphere in which the distinction between types of study is continually changing and increasingly blurred. Based on the classical validity framework of experimental/quasi-experimental studies, we review the literature to analyze the convergence of design elements bearing on methodological quality in primary studies included in systematic reviews and in ethnographic research. From a practical and complementary methodological perspective, we specify the design elements that should be taken into account to improve validity and generalization in program evaluation practice across different methodologies, and we recommend ways to strengthen these elements so as to enhance validity and generalization.

  3. A Pilot Study of Biomedical Text Comprehension using an Attention-Based Deep Neural Reader: Design and Experimental Analysis

    PubMed Central

    Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon

    2018-01-01

    Background With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. Objective This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. Methods We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. Results The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. Conclusions In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert level knowledge. PMID:29305341
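
    The ensemble step described above amounts to averaging the answer distributions of several independently trained readers; the sketch below illustrates that averaging on hypothetical candidate answers and probabilities and is not the authors' implementation.

        import numpy as np

        candidates = ["EGFR", "TP53", "BRCA1"]        # hypothetical candidate answer entities
        model_probs = np.array([                      # rows = independently trained models
            [0.60, 0.25, 0.15],
            [0.45, 0.40, 0.15],
            [0.55, 0.20, 0.25],
        ])

        mean_probs = model_probs.mean(axis=0)         # averaging reduces variance across models
        print("ensemble answer:", candidates[int(mean_probs.argmax())])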

  4. Development and validation of a Malawian version of the primary care assessment tool.

    PubMed

    Dullie, Luckson; Meland, Eivind; Hetlevik, Øystein; Mildestvedt, Thomas; Gjesdal, Sturla

    2018-05-16

    Malawi does not have validated tools for assessing primary care performance from patients' experience. The aim of this study was to develop a Malawian version of Primary Care Assessment Tool (PCAT-Mw) and to evaluate its reliability and validity in the assessment of the core primary care dimensions from adult patients' perspective in Malawi. A team of experts assessed the South African version of the primary care assessment tool (ZA-PCAT) for face and content validity. The adapted questionnaire underwent forward and backward translation and a pilot study. The tool was then used in an interviewer administered cross-sectional survey in Neno district, Malawi, to test validity and reliability. Exploratory factor analysis was performed on a random half of the sample to evaluate internal consistency, reliability and construct validity of items and scales. The identified constructs were then tested with confirmatory factor analysis. Likert scale assumption testing and descriptive statistics were done on the final factor structure. The PCAT-Mw was further tested for intra-rater and inter-rater reliability. From the responses of 631 patients, a 29-item PCAT-Mw was constructed comprising seven multi-item scales, representing five primary care dimensions (first contact, continuity, comprehensiveness, coordination and community orientation). All the seven scales achieved good internal consistency, item-total correlations and construct validity. Cronbach's alpha coefficient ranged from 0.66 to 0.91. A satisfactory goodness of fit model was achieved (GFI = 0.90, CFI = 0.91, RMSEA = 0.05, PCLOSE = 0.65). The full range of possible scores was observed for all scales. Scaling assumptions tests were achieved for all except the two comprehensiveness scales. Intra-class correlation coefficient (ICC) was 0.90 (n = 44, 95% CI 0.81-0.94, p < 0.001) for intra-rater reliability and 0.84 (n = 42, 95% CI 0.71-0.96, p < 0.001) for inter-rater reliability. Comprehensive metric analyses supported the reliability and validity of PCAT-Mw in assessing the core concepts of primary care from adult patients' experience. This tool could be used for health service research in primary care in Malawi.

  5. [The Basel Screening Instrument for Psychosis (BSIP): development, structure, reliability and validity].

    PubMed

    Riecher-Rössler, A; Aston, J; Ventura, J; Merlo, M; Borgwardt, S; Gschwandtner, U; Stieglitz, R-D

    2008-04-01

    Early detection of psychosis is of growing clinical importance. So far, however, there is no screening instrument with sufficient validity for detecting individuals with beginning psychosis in the atypical early stages of the disease. We have therefore developed the Basel Screening Instrument for Psychosis (BSIP) and tested its feasibility, interrater reliability and validity. The aim of this paper is to describe the development and structure of the instrument and to report the results of the studies on reliability and validity. The instrument was developed on the basis of a comprehensive search of the literature on the most important risk factors and early signs of schizophrenic psychoses. The interrater-reliability study was conducted on 24 psychiatric cases. Validity was tested on 206 individuals referred to our early detection clinic from 3/1/2000 until 2/28/2003. We identified seven categories of relevance for the early detection of psychosis and used them to construct a semistructured interview. Interrater reliability for high-risk individuals was high (kappa .87). Predictive validity was comparable to that of other, more comprehensive instruments: 16 (32%) of 50 individuals classified as being at risk for psychosis by the BSIP did in fact develop frank psychosis within a follow-up period of two to five years. The BSIP is the first screening instrument for the early detection of psychosis that has been validated on the basis of transition to psychosis. The BSIP is easy for experienced psychiatrists to use and has very good interrater reliability and predictive validity.

  6. Follicle Online: an integrated database of follicle assembly, development and ovulation.

    PubMed

    Hua, Juan; Xu, Bo; Yang, Yifan; Ban, Rongjun; Iqbal, Furhan; Cooke, Howard J; Zhang, Yuanwei; Shi, Qinghua

    2015-01-01

    Folliculogenesis is an important part of ovarian function as it provides the oocytes for female reproductive life. Characterizing genes/proteins involved in folliculogenesis is fundamental for understanding the mechanisms associated with this biological function and to cure the diseases associated with folliculogenesis. A large number of genes/proteins associated with folliculogenesis have been identified from different species. However, no dedicated public resource is currently available for folliculogenesis-related genes/proteins that are validated by experiments. Here, we are reporting a database 'Follicle Online' that provides the experimentally validated gene/protein map of the folliculogenesis in a number of species. Follicle Online is a web-based database system for storing and retrieving folliculogenesis-related experimental data. It provides detailed information for 580 genes/proteins (from 23 model organisms, including Homo sapiens, Mus musculus, Rattus norvegicus, Mesocricetus auratus, Bos Taurus, Drosophila and Xenopus laevis) that have been reported to be involved in folliculogenesis, POF (premature ovarian failure) and PCOS (polycystic ovary syndrome). The literature was manually curated from more than 43,000 published articles (till 1 March 2014). The Follicle Online database is implemented in PHP + MySQL + JavaScript and this user-friendly web application provides access to the stored data. In summary, we have developed a centralized database that provides users with comprehensive information about genes/proteins involved in folliculogenesis. This database can be accessed freely and all the stored data can be viewed without any registration. Database URL: http://mcg.ustc.edu.cn/sdap1/follicle/index.php © The Author(s) 2015. Published by Oxford University Press.

  7. Follicle Online: an integrated database of follicle assembly, development and ovulation

    PubMed Central

    Hua, Juan; Xu, Bo; Yang, Yifan; Ban, Rongjun; Iqbal, Furhan; Zhang, Yuanwei; Shi, Qinghua

    2015-01-01

    Folliculogenesis is an important part of ovarian function as it provides the oocytes for female reproductive life. Characterizing genes/proteins involved in folliculogenesis is fundamental for understanding the mechanisms associated with this biological function and to cure the diseases associated with folliculogenesis. A large number of genes/proteins associated with folliculogenesis have been identified from different species. However, no dedicated public resource is currently available for folliculogenesis-related genes/proteins that are validated by experiments. Here, we are reporting a database ‘Follicle Online’ that provides the experimentally validated gene/protein map of the folliculogenesis in a number of species. Follicle Online is a web-based database system for storing and retrieving folliculogenesis-related experimental data. It provides detailed information for 580 genes/proteins (from 23 model organisms, including Homo sapiens, Mus musculus, Rattus norvegicus, Mesocricetus auratus, Bos Taurus, Drosophila and Xenopus laevis) that have been reported to be involved in folliculogenesis, POF (premature ovarian failure) and PCOS (polycystic ovary syndrome). The literature was manually curated from more than 43 000 published articles (till 1 March 2014). The Follicle Online database is implemented in PHP + MySQL + JavaScript and this user-friendly web application provides access to the stored data. In summary, we have developed a centralized database that provides users with comprehensive information about genes/proteins involved in folliculogenesis. This database can be accessed freely and all the stored data can be viewed without any registration. Database URL: http://mcg.ustc.edu.cn/sdap1/follicle/index.php PMID:25931457

  8. Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model

    DTIC Science & Technology

    2010-03-01

    Thesis by John Haiducek, 1st Lt, USAF (BS, Physics), AFIT/GAP/ENP/10-M07, March 2010. Approved for public release; distribution unlimited. Abstract: The High Energy Laser End-to-End…

  9. The humankind genome: from genetic diversity to the origin of human diseases.

    PubMed

    Belizário, Jose E

    2013-12-01

    Genome-wide association studies have failed to establish common variant risk for the majority of common human diseases. The underlying reasons for this failure are explained by recent studies of resequencing and comparison of over 1200 human genomes and 10 000 exomes, together with the delineation of DNA methylation patterns (epigenome) and full characterization of coding and noncoding RNAs (transcriptome) being transcribed. These studies have provided the most comprehensive catalogues of functional elements and genetic variants that are now available for global integrative analysis and experimental validation in prospective cohort studies. With these datasets, researchers will have unparalleled opportunities for the alignment, mining, and testing of hypotheses for the roles of specific genetic variants, including copy number variations, single nucleotide polymorphisms, and indels as the cause of specific phenotypes and diseases. Through the use of next-generation sequencing technologies for genotyping and standardized ontological annotation to systematically analyze the effects of genomic variation on humans and model organism phenotypes, we will be able to find candidate genes and new clues for disease's etiology and treatment. This article describes essential concepts in genetics and genomic technologies as well as the emerging computational framework to comprehensively search websites and platforms available for the analysis and interpretation of genomic data.

  10. Comprehensive analysis of ß-lactam antibiotics including penicillins, cephalosporins, and carbapenems in poultry muscle using liquid chromatography coupled to tandem mass spectrometry.

    PubMed

    Berendsen, Bjorn J A; Gerritsen, Henk W; Wegh, Robin S; Lameris, Steven; van Sebille, Ralph; Stolker, Alida A M; Nielen, Michel W F

    2013-09-01

    A comprehensive method for the quantitative residue analysis of trace levels of 22 ß-lactam antibiotics, including penicillins, cephalosporins, and carbapenems, in poultry muscle by liquid chromatography in combination with tandem mass spectrometric detection is reported. The samples analyzed for ß-lactam residues are hydrolyzed using piperidine in order to improve compound stability and to include the total residue content of the cephalosporin ceftiofur. The reaction procedure was optimized using a full experimental design. Following detailed isotope-labeling and tandem mass spectrometry studies and exact mass measurements using high-resolution mass spectrometry, reaction schemes could be proposed for all ß-lactams studied. The main reaction occurring is the hydrolysis of the ß-lactam ring with formation of the piperidine-substituted amide. For some ß-lactams, multiple isobaric hydrolysis reaction products are obtained, in accordance with expectations, but this did not hamper quantitative analysis. The final method was fully validated as a quantitative confirmatory residue analysis method according to Commission Decision 2002/657/EC and showed satisfactory quantitative performance for all compounds, with trueness between 80 and 110% and within-laboratory reproducibility below 22% at target level, except for biapenem, for which the method proved to be suitable for qualitative analysis only.
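
    A full experimental design simply enumerates every combination of the chosen factor levels. The sketch below generates such a design for the hydrolysis optimisation with hypothetical factors and levels; the actual factors and ranges used in the study are not given here.

        from itertools import product

        factors = {                      # hypothetical factors and levels
            "piperidine_volume_uL": [50, 100, 200],
            "temperature_C": [40, 60],
            "reaction_time_min": [30, 60],
        }

        # enumerate every combination of levels as one experimental run
        for run, levels in enumerate(product(*factors.values()), start=1):
            print(run, dict(zip(factors, levels)))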

  11. A Computational Model of Torque Generation: Neural, Contractile, Metabolic and Musculoskeletal Components

    PubMed Central

    Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.

    2013-01-01

    The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetics function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245
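
    One building block that comprehensive neuromuscular models of this kind typically contain is first-order activation dynamics, in which muscle activation lags neural excitation with separate activation and deactivation time constants. The sketch below integrates that standard component with forward Euler using assumed time constants; it is not the authors' model.

        import numpy as np

        dt, t_end = 0.001, 1.0                     # time step and duration (s)
        tau_act, tau_deact = 0.015, 0.050          # assumed activation/deactivation time constants (s)
        t = np.arange(0.0, t_end, dt)
        u = ((t > 0.2) & (t < 0.6)).astype(float)  # square-wave neural excitation
        a = np.zeros_like(t)

        for i in range(1, t.size):
            tau = tau_act if u[i] > a[i - 1] else tau_deact
            a[i] = a[i - 1] + dt * (u[i] - a[i - 1]) / tau   # da/dt = (u - a) / tau
        print(f"peak activation: {a.max():.2f}")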

  12. ASPsiRNA: A Resource of ASP-siRNAs Having Therapeutic Potential for Human Genetic Disorders and Algorithm for Prediction of Their Inhibitory Efficacy

    PubMed Central

    Monga, Isha; Qureshi, Abid; Thakur, Nishant; Gupta, Amit Kumar; Kumar, Manoj

    2017-01-01

    Allele-specific siRNAs (ASP-siRNAs) have emerged as promising therapeutic molecules owing to their selectivity to inhibit the mutant allele or associated single-nucleotide polymorphisms (SNPs) sparing the expression of the wild-type counterpart. Thus, a dedicated bioinformatics platform encompassing updated ASP-siRNAs and an algorithm for the prediction of their inhibitory efficacy will be helpful in tackling currently intractable genetic disorders. In the present study, we have developed the ASPsiRNA resource (http://crdd.osdd.net/servers/aspsirna/) covering three components viz (i) ASPsiDb, (ii) ASPsiPred, and (iii) analysis tools like ASP-siOffTar. ASPsiDb is a manually curated database harboring 4543 (including 422 chemically modified) ASP-siRNAs targeting 78 unique genes involved in 51 different diseases. It furnishes comprehensive information from experimental studies on ASP-siRNAs along with multidimensional genetic and clinical information for numerous mutations. ASPsiPred is a two-layered algorithm to predict efficacy of ASP-siRNAs for fully complementary mutant (Effmut) and wild-type allele (Effwild) with one mismatch by ASPsiPredSVM and ASPsiPredmatrix, respectively. In ASPsiPredSVM, 922 unique ASP-siRNAs with experimentally validated quantitative Effmut were used. During 10-fold cross-validation (10nCV) employing various sequence features on the training/testing dataset (T737), the best predictive model achieved a maximum Pearson’s correlation coefficient (PCC) of 0.71. Further, the accuracy of the classifier to predict Effmut against novel genes was assessed by leave one target out cross-validation approach (LOTOCV). ASPsiPredmatrix was constructed from rule-based studies describing the effect of single siRNA:mRNA mismatches on the efficacy at 19 different locations of siRNA. Thus, ASPsiRNA encompasses the first database, prediction algorithm, and off-target analysis tool that is expected to accelerate research in the field of RNAi-based therapeutics for human genetic diseases. PMID:28696921
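
    The leave-one-target-out cross-validation (LOTOCV) mentioned above holds out all siRNAs directed against one gene at a time, so performance is always measured on an unseen target. The sketch below shows that grouping with scikit-learn on random stand-in features and efficacies; it is not the ASPsiPred code.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.random((60, 10))              # stand-in sequence-derived features
        y = rng.random(60)                    # stand-in measured efficacies (Effmut)
        genes = np.repeat(np.arange(6), 10)   # stand-in target-gene labels (the held-out groups)

        pred = np.empty_like(y)
        for train, test in LeaveOneGroupOut().split(X, y, groups=genes):
            pred[test] = SVR().fit(X[train], y[train]).predict(X[test])

        print(f"LOTOCV Pearson r: {pearsonr(y, pred)[0]:.2f}")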

  13. A Toolkit to Study Sensitivity of the Geant4 Predictions to the Variations of the Physics Model Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, Laura; Genser, Krzysztof; Hatcher, Robert

    Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and phenomenological predictions with physically motivated parameters estimated by theoretical calculation or measurement. Because these models are tuned to cover a very wide range of possible simulation tasks, they may not always be optimized for a given process or a given material. This raises several critical questions, e.g. how sensitive Geant4 predictions are to the variations of the model parameters, or what uncertainties are associated with a particular tune of a Geant4 physics model, or a group of models, or how to consistently derive guidance for Geant4 model development and improvement from a wide range of available experimental data. We have designed and implemented a comprehensive, modular, user-friendly software toolkit to study and address such questions. It allows one to easily modify parameters of one or several Geant4 physics models involved in the simulation, and to perform collective analysis of multiple variants of the resulting physics observables of interest and comparison against a variety of corresponding experimental data. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, easy to expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented and illustrated with results obtained with Geant4 key hadronic models.
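
    The core loop of such a sensitivity study can be pictured as varying one model parameter over a grid, re-running the simulation, and scoring each variant against experimental data. The sketch below uses a trivial stand-in simulate() function in place of a Geant4 run and a chi-square figure of merit; it illustrates the workflow only and is not the toolkit itself.

        import numpy as np

        rng = np.random.default_rng(1)
        x_exp = np.linspace(0.1, 1.0, 10)                        # stand-in experimental data points
        y_exp = 2.0 * x_exp + 0.05 * rng.normal(size=x_exp.size)
        sigma = np.full_like(y_exp, 0.05)                        # stand-in experimental uncertainties

        def simulate(param, x):
            # stand-in for running the simulation with one physics-model parameter changed
            return param * x

        for param in (1.8, 1.9, 2.0, 2.1, 2.2):                  # hypothetical parameter grid
            chi2 = np.sum(((simulate(param, x_exp) - y_exp) / sigma) ** 2)
            print(f"param={param:.1f}  chi2/ndf={chi2 / x_exp.size:.2f}")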

  14. A comprehensive approach to identify reliable reference gene candidates to investigate the link between alcoholism and endocrinology in Sprague-Dawley rats.

    PubMed

    Taki, Faten A; Abdel-Rahman, Abdel A; Zhang, Baohong

    2014-01-01

    Gender and hormonal differences are often correlated with alcohol dependence and related complications like addiction and breast cancer. Estrogen (E2) is an important sex hormone because it serves as a key signaling molecule in organism-level signaling pathways. Alcoholism has been reported to affect estrogen receptor signaling; however, identifying the players involved in such a multi-faceted syndrome is complex and requires an interdisciplinary approach. In many situations, preliminary investigations include straightforward yet informative biotechniques such as gene expression analyses using quantitative real-time PCR (qRT-PCR). The validity of qRT-PCR-based conclusions is affected by the choice of reliable internal controls. With this in mind, we compiled a list of 15 commonly used housekeeping genes (HKGs) as potential reference gene candidates in rat biological models. A comprehensive comparison among five statistical approaches (geNorm, the comparative dCt method, NormFinder, BestKeeper, and RefFinder) was performed to identify the minimal number as well as the most stable reference genes required for reliable normalization in experimental rat groups comprising sham-operated (SO) rats and ovariectomized rats in the absence (OVX) or presence (OVXE2) of E2. These rat groups were subdivided into subgroups that received alcohol in a liquid diet or an isocaloric control liquid diet for 12 weeks. Our results showed that U87, 5S rRNA, GAPDH, and U5a were the most reliable reference gene candidates in heart and brain tissue. However, the gene stability ranking was specific to each tissue and input combination. These preliminary findings highlight the variability in reference gene rankings across different experimental conditions and analytic methods and constitute a fundamental step for gene expression assays.
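
    Of the five approaches listed above, the comparative dCt method is the simplest to illustrate: each candidate gene is ranked by the average standard deviation of its pairwise Ct differences across samples, with lower values indicating greater stability. The sketch below uses hypothetical Ct values and is not the study's analysis code.

        import numpy as np

        genes = ["U87", "5S rRNA", "GAPDH", "U5a"]
        ct = np.array([                 # rows = samples, columns = candidate reference genes (hypothetical Ct values)
            [22.1, 18.4, 19.9, 23.0],
            [22.3, 18.6, 20.5, 23.1],
            [22.0, 18.5, 19.7, 23.2],
            [22.4, 18.3, 20.8, 22.9],
        ])

        ranking = []
        for i, gene in enumerate(genes):
            # standard deviation of the Ct difference against every other candidate
            sds = [np.std(ct[:, i] - ct[:, j], ddof=1) for j in range(len(genes)) if j != i]
            ranking.append((float(np.mean(sds)), gene))

        for sd, gene in sorted(ranking):
            print(f"{gene}: mean pairwise SD = {sd:.2f}")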

  15. The Comparative Effects of Comprehensible Input, Output and Corrective Feedback on the Receptive Acquisition of L2 Vocabulary Items

    ERIC Educational Resources Information Center

    Nowbakht, Mohammad; Shahnazari, Mohammadtaghi

    2015-01-01

    In the present study, the comparative effects of comprehensible input, output and corrective feedback on the receptive acquisition of L2 vocabulary items were investigated. Two groups of beginning EFL learners participated in the study. The control group received comprehensible input only, while the experimental group received input and was…

  16. Measuring organizational readiness for knowledge translation in chronic care.

    PubMed

    Gagnon, Marie-Pierre; Labarthe, Jenni; Légaré, France; Ouimet, Mathieu; Estabrooks, Carole A; Roch, Geneviève; Ghandour, El Kebir; Grimshaw, Jeremy

    2011-07-13

    Knowledge translation (KT) is an imperative in order to implement research-based and contextualized practices that can answer the numerous challenges of complex health problems. The Chronic Care Model (CCM) provides a conceptual framework to guide the implementation process in chronic care. Yet, organizations aiming to improve chronic care require an adequate level of organizational readiness (OR) for KT. Available instruments on organizational readiness for change (ORC) have shown limited validity, and are not tailored or adapted to specific phases of the knowledge-to-action (KTA) process. We aim to develop an evidence-based, comprehensive, and valid instrument to measure OR for KT in healthcare. The OR for KT instrument will be based on core concepts retrieved from existing literature and validated by a Delphi study. We will specifically test the instrument in chronic care that is of an increasing importance for the health system. Phase one: We will conduct a systematic review of the theories and instruments assessing ORC in healthcare. The retained theoretical information will be synthesized in a conceptual map. A bibliography and database of ORC instruments will be prepared after appraisal of their psychometric properties according to the standards for educational and psychological testing. An online Delphi study will be carried out among decision makers and knowledge users across Canada to assess the importance of these concepts and measures at different steps in the KTA process in chronic care.Phase two: A final OR for KT instrument will be developed and validated both in French and in English and tested in chronic disease management to measure OR for KT regarding the adoption of comprehensive, patient-centered, and system-based CCMs. This study provides a comprehensive synthesis of current knowledge on explanatory models and instruments assessing OR for KT. Moreover, this project aims to create more consensus on the theoretical underpinnings and the instrumentation of OR for KT in chronic care. The final product--a comprehensive and valid OR for KT instrument--will provide the chronic care settings with an instrument to assess their readiness to implement evidence-based chronic care.

  17. Measuring organizational readiness for knowledge translation in chronic care

    PubMed Central

    2011-01-01

    Background Knowledge translation (KT) is an imperative in order to implement research-based and contextualized practices that can answer the numerous challenges of complex health problems. The Chronic Care Model (CCM) provides a conceptual framework to guide the implementation process in chronic care. Yet, organizations aiming to improve chronic care require an adequate level of organizational readiness (OR) for KT. Available instruments on organizational readiness for change (ORC) have shown limited validity, and are not tailored or adapted to specific phases of the knowledge-to-action (KTA) process. We aim to develop an evidence-based, comprehensive, and valid instrument to measure OR for KT in healthcare. The OR for KT instrument will be based on core concepts retrieved from existing literature and validated by a Delphi study. We will specifically test the instrument in chronic care that is of an increasing importance for the health system. Methods Phase one: We will conduct a systematic review of the theories and instruments assessing ORC in healthcare. The retained theoretical information will be synthesized in a conceptual map. A bibliography and database of ORC instruments will be prepared after appraisal of their psychometric properties according to the standards for educational and psychological testing. An online Delphi study will be carried out among decision makers and knowledge users across Canada to assess the importance of these concepts and measures at different steps in the KTA process in chronic care. Phase two: A final OR for KT instrument will be developed and validated both in French and in English and tested in chronic disease management to measure OR for KT regarding the adoption of comprehensive, patient-centered, and system-based CCMs. Discussion This study provides a comprehensive synthesis of current knowledge on explanatory models and instruments assessing OR for KT. Moreover, this project aims to create more consensus on the theoretical underpinnings and the instrumentation of OR for KT in chronic care. The final product--a comprehensive and valid OR for KT instrument--will provide the chronic care settings with an instrument to assess their readiness to implement evidence-based chronic care. PMID:21752264

  18. National IQs Calculated and Validated for 108 Nations

    ERIC Educational Resources Information Center

    Lynn, Richard; Meisenberg, Gerhard

    2010-01-01

    We estimate the validity of the national IQs presented by Lynn and Vanhanen (2002, 2006) by examining whether they are consistent with the educational attainment of school students in math, science and reading comprehension in 108 countries and provinces. The educational attainment scores in a number of studies are integrated to give EAs…

  19. A Conversation Analysis-Informed Test of L2 Aural Pragmatic Comprehension

    ERIC Educational Resources Information Center

    Walters, F. Scott

    2009-01-01

    Speech act theory-based, second language pragmatics testing (SLPT) raises test-validation issues owing to a lack of correspondence with empirical conversational data. On the assumption that conversation analysis (CA) provides a more accurate account of language use, it is suggested that CA serve as a more empirically valid basis for SLPT…

  20. The Strengths and Difficulties Questionnaire (SDQ): The Factor Structure and Scale Validation in U.S. Adolescents

    ERIC Educational Resources Information Center

    He, Jian-Ping; Burstein, Marcy; Schmitz, Anja; Merikangas, Kathleen R.

    2013-01-01

    The Strengths and Difficulties Questionnaire (SDQ) is one of the most commonly used instruments for screening psychopathology in children and adolescents. This study evaluated the hypothesized five-factor structure of the SDQ and examined its convergent validity against comprehensive clinical diagnostic assessments. Data were derived from the…

  1. Development and Validation of a Christian-Based Grief Recovery Scale

    ERIC Educational Resources Information Center

    Jen Der Pan, Peter; Deng, Liang-Yu F.; Tsai, S. L.; Chen, Ho-Yuan J.; Yuan, Sheng-Shiou Jenny

    2014-01-01

    The purpose of this study was to develop and validate a Christian-based Grief Recovery Scale (CGRS) which was used to measure Christians recovering from grief after a significant loss. Taiwanese Christian participants were recruited from churches and a comprehensive university in northern Taiwan. They were affected by both the Christian faith and…

  2. Deriving Childhood Temperament Measures from Emotion-Eliciting Behavioral Episodes: Scale Construction and Initial Validation

    ERIC Educational Resources Information Center

    Gagne, Jeffrey R.; Van Hulle, Carol A.; Aksan, Nazan; Essex, Marilyn J.; Goldsmith, H. Hill

    2011-01-01

    The authors describe the development and initial validation of a home-based version of the Laboratory Temperament Assessment Battery (Lab-TAB), which was designed to assess childhood temperament with a comprehensive series of emotion-eliciting behavioral episodes. This article provides researchers with general guidelines for assessing specific…

  3. Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation

    ERIC Educational Resources Information Center

    Schroeder, Sascha; Richter, Tobias; Hoever, Inga

    2008-01-01

    Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…

  4. A Longitudinal Study of the Predictive Validity of a Kindergarten Screening Battery.

    ERIC Educational Resources Information Center

    Kilgallon, Mary K.; Mueller, Richard J.

    Test validity was studied in nine subtests of a kindergarten screening battery used to predict reading comprehension for children up to five years after entering kindergarten. The independent variables were kindergarteners' scores on the: (1) Otis-Lennon Mental Ability Test; (2) Bender Visual Motor Gestalt Test; (3) Detroit Tests of Learning…

  5. The Reliability, Validity, and Usefulness of the Objective Structured Clinical Examination (OSCE) in Dental Education

    ERIC Educational Resources Information Center

    Graham, Roseanna

    2010-01-01

    This study evaluated the reliability, validity, and educational usefulness of a comprehensive, multidisciplinary Objective Structured Clinical Examination (OSCE) in dental education. The OSCE was administered to dental students at the Columbia University College of Dental Medicine (CDM) before they entered clinical training. Participants in this…

  6. Technical Adequacy of the easyCBM Grade 2 Reading Measures. Technical Report #1004

    ERIC Educational Resources Information Center

    Jamgochian, Elisa; Park, Bitnara Jasmine; Nese, Joseph F. T.; Lai, Cheng-Fei; Saez, Leilani; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2010-01-01

    In this technical report, we provide reliability and validity evidence for the easyCBM[R] Reading measures for grade 2 (word and passage reading fluency and multiple choice reading comprehension). Evidence for reliability includes internal consistency and item invariance. Evidence for validity includes concurrent, predictive, and construct…

  7. Improving text comprehension strategies in upper primary school children: a design experiment.

    PubMed

    De Corte, E; Verschaffel, L; Van De Ven, A

    2001-12-01

    With respect to the acquisition of competence in reading, new standards for primary education stress more than before the importance of learning and teaching cognitive and metacognitive strategies that facilitate text comprehension. Therefore, there is a need to design a research-based instructional approach to strategic reading comprehension. The design experiment aimed at developing, implementing and evaluating a research-based, but also practically applicable learning environment for enhancing skilled strategy use in upper primary school children when reading a text. Four text comprehension strategies (activating prior knowledge, clarifying difficult words, making a schematic representation of the text, and formulating the main idea) and a metacognitive strategy (regulating one's own reading process) were trained through a variety of highly interactive instructional techniques, namely modelling, whole class discussion, and small group work in the format of reciprocal teaching. Participants in the study were four experimental 5th grade classes (79 children) and eight comparable control classes (149 pupils). The effects of the learning environment were measured using a pretest-post-test-retention design. Multilevel hierarchical linear regression models were used to analyse the quantitative data of a Reading Strategy Test, a standardised Reading Comprehension Test, a Reading Attitude Scale, a Transfer Test and an interview about strategy use during reading. The data of the Reading Strategy Test, the Transfer Test and the interviews about strategy use showed that the experimental group out-performed the control group in terms of the strategy adoption and application during text reading. Whilst the experimental group also scored higher on the Reading Comprehension Test than the control group, the difference was not significant. This design experiment shows that it is possible to foster pupils' use and transfer of strategic reading comprehension skills in regular classrooms by immersing them in a powerful learning environment. But this intervention does not automatically result in improvement of performance on a standardised reading comprehension test.
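
    As an illustration of the kind of multilevel analysis described above (pupils nested in classes, post-test scores modelled with condition and pretest as predictors), the sketch below uses statsmodels on a wholly synthetic data set; the column names, class counts and effect sizes are invented for the example and are not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Build a hypothetical pupil-level data set: 4 experimental and 8 control classes.
      rng = np.random.default_rng(0)
      rows = []
      for c in range(12):
          condition = int(c < 4)                      # 1 = experimental class, 0 = control
          class_effect = rng.normal(scale=2.0)        # shared class-level deviation
          for _ in range(20):
              pretest = rng.normal(50, 10)
              posttest = 5 + 0.8 * pretest + 4.0 * condition + class_effect + rng.normal(scale=5.0)
              rows.append({"class_id": c, "condition": condition,
                           "pretest": pretest, "posttest": posttest})
      df = pd.DataFrame(rows)

      # Two-level model: fixed effects for condition and pretest, random intercept per class.
      model = smf.mixedlm("posttest ~ condition + pretest", df, groups=df["class_id"])
      print(model.fit().summary())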

  8. Open ISEmeter: An open hardware high-impedance interface for potentiometric detection.

    PubMed

    Salvador, C; Mesa, M S; Durán, E; Alvarez, J L; Carbajo, J; Mozo, J D

    2016-05-01

    In this work, a new open hardware interface based on Arduino to read electromotive force (emf) from potentiometric detectors is presented. The interface has been fully designed with the open code philosophy and all documentation will be accessible on the web. The paper describes a comprehensive project including the electronic design, the firmware loaded on Arduino, and the Java-coded graphical user interface to load data in a computer (PC or Mac) for processing. The prototype was tested by measuring the calibration curve of a detector. As the detection element, an active poly(vinyl chloride)-based membrane was used, doped with cetyltrimethylammonium dodecylsulphate (CTA(+)-DS(-)). The experimental measures of emf indicate Nernstian behaviour with respect to the CTA(+) content of the test solutions, as described in the literature, proving the validity of the developed prototype. A comparative analysis of performance was made by using the same chemical detector but changing the measurement instrumentation.
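
    The Nernstian check mentioned above amounts to verifying that the measured emf varies roughly linearly with the logarithm of concentration, with a slope near 59 mV per decade for a monovalent ion at 25 degC. A minimal sketch with invented calibration data:

      import numpy as np

      conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])        # mol/L, hypothetical calibration solutions
      emf_mV = np.array([152.0, 210.5, 269.8, 328.9])  # hypothetical cell potentials

      slope, intercept = np.polyfit(np.log10(conc), emf_mV, 1)
      print(f"slope = {slope:.1f} mV/decade")          # ~59 mV/decade expected at 25 degC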

  9. The existential function of close relationships: introducing death into the science of love.

    PubMed

    Mikulincer, Mario; Florian, Victor; Hirschberger, Gilad

    2003-01-01

    Originally, terror management theory proposed two psychological mechanisms in dealing with the terror of death awareness-cultural worldview validation and self-esteem enhancement. In this article, we would like to promote the idea of close relationships as an additional death-anxiety buffering mechanism and review a growing body of empirical data that support this contention. Based on a comprehensive analysis of the sociocultural and personal functions of close relationships, we formulate two basic hypotheses that have received empirical support in a series of experimental studies. First, death reminders heighten the motivation to form and maintain close relationships. Second, the maintenance of close relationships provides a symbolic shield against the terror of death, whereas the breaking of close relationships results in an upsurge of death awareness. In addition, we present empirical evidence supporting the possibility that close relationships function as a related yet separate mechanism from the self-esteem and cultural worldview defenses.

  10. Thermal, size and surface effects on the nonlinear pull-in of small-scale piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    SoltanRezaee, Masoud; Ghazavi, Mohammad-Reza

    2017-09-01

    Electrostatically actuated miniature wires/tubes have many operational applications in the high-tech industries. In this research, the nonlinear pull-in instability of piezoelectric thermal small-scale switches subjected to Coulomb and dissipative forces is analyzed using strain gradient and modified couple stress theories. The discretized governing equation is solved numerically by means of the step-by-step linearization method. The correctness of the formulated model and solution procedure is validated through comparison with experimental and several theoretical results. Herein, the length-scale, surface energy, van der Waals attraction and nonlinear curvature are considered in the present comprehensive model and the thermo-electro-mechanical behavior of cantilever piezo-beams is discussed in detail. It is found that the piezoelectric actuation can be used as a design parameter to control the pull-in phenomenon. The obtained results are applicable in stability analysis, practical design and control of actuated miniature intelligent devices.

  11. A model predictive current control of flux-switching permanent magnet machines for torque ripple minimization

    NASA Astrophysics Data System (ADS)

    Huang, Wentao; Hua, Wei; Yu, Feng

    2017-05-01

    Due to high airgap flux density generated by magnets and the special double salient structure, the cogging torque of the flux-switching permanent magnet (FSPM) machine is considerable, which limits the further applications. Based on the model predictive current control (MPCC) and the compensation control theory, a compensating-current MPCC (CC-MPCC) scheme is proposed and implemented to counteract the dominated components in cogging torque of an existing three-phase 12/10 FSPM prototyped machine, and thus to alleviate the influence of the cogging torque and improve the smoothness of electromagnetic torque as well as speed, where a comprehensive cost function is designed to evaluate the switching states. The simulated results indicate that the proposed CC-MPCC scheme can suppress the torque ripple significantly and offer satisfactory dynamic performances by comparisons with the conventional MPCC strategy. Finally, experimental results validate both the theoretical and simulated predictions.
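
    For readers unfamiliar with finite-control-set model predictive current control, the sketch below shows the generic idea of predicting the currents one step ahead for each candidate voltage vector and selecting the one that minimizes a current-error cost. The simple discretized d-q machine model, the parameter values and the candidate-vector set are illustrative assumptions, not the CC-MPCC formulation of the paper, which additionally injects a ripple-compensating term into the current references.

      import numpy as np

      Rs, Ld, Lq, psi_pm, Ts = 0.1, 1e-3, 1e-3, 0.05, 1e-4   # hypothetical machine and sampling parameters

      def predict(i_dq, v_dq, omega):
          """One-step Euler prediction of d-q currents for a simple PM machine model."""
          i_d, i_q = i_dq
          v_d, v_q = v_dq
          di_d = (v_d - Rs * i_d + omega * Lq * i_q) / Ld
          di_q = (v_q - Rs * i_q - omega * (Ld * i_d + psi_pm)) / Lq
          return np.array([i_d + Ts * di_d, i_q + Ts * di_q])

      def best_vector(i_dq, i_ref, omega, candidates):
          """Return the index of the candidate voltage vector with the lowest cost."""
          costs = [np.sum((i_ref - predict(i_dq, v, omega)) ** 2) for v in candidates]
          return int(np.argmin(costs))

      vectors = [np.array([vd, vq]) for vd in (-100.0, 0.0, 100.0) for vq in (-100.0, 0.0, 100.0)]
      k = best_vector(np.array([0.0, 5.0]), np.array([0.0, 6.0]), omega=2 * np.pi * 50, candidates=vectors)
      print("selected voltage vector:", vectors[k])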

  12. DOCKTITE-a highly versatile step-by-step workflow for covalent docking and virtual screening in the molecular operating environment.

    PubMed

    Scholz, Christoph; Knorr, Sabine; Hamacher, Kay; Schmidt, Boris

    2015-02-23

    The formation of a covalent bond with the target is essential for a number of successful drugs, yet tools for covalent docking without significant restrictions regarding warhead or receptor classes are rare and limited in use. In this work we present DOCKTITE, a highly versatile workflow for covalent docking in the Molecular Operating Environment (MOE) combining automated warhead screening, nucleophilic side chain attachment, pharmacophore-based docking, and a novel consensus scoring approach. The comprehensive validation study includes pose predictions of 35 protein/ligand complexes which resulted in a mean RMSD of 1.74 Å and a prediction rate of 71.4% with an RMSD below 2 Å, a virtual screening with an area under the curve (AUC) for the receiver operating characteristics (ROC) of 0.81, and a significant correlation between predicted and experimental binding affinities (ρ = 0.806, R(2) = 0.649, p < 0.005).
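
    The validation statistics quoted above (pose RMSD, ROC AUC for the virtual screen, Spearman correlation with experimental affinities) are standard and can be reproduced generically along the following lines; the arrays are invented placeholders, not data from the study.

      import numpy as np
      from scipy.stats import spearmanr
      from sklearn.metrics import roc_auc_score

      def pose_rmsd(pred, ref):
          """Heavy-atom RMSD between predicted and reference ligand poses (N x 3 arrays)."""
          return np.sqrt(np.mean(np.sum((pred - ref) ** 2, axis=1)))

      rng = np.random.default_rng(0)
      ref = rng.normal(size=(20, 3))                     # hypothetical reference pose
      pred = ref + rng.normal(scale=0.5, size=(20, 3))   # hypothetical docked pose
      print("RMSD (Angstrom):", round(float(pose_rmsd(pred, ref)), 2))

      labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])        # 1 = active, 0 = decoy (hypothetical)
      scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05])
      print("ROC AUC:", roc_auc_score(labels, scores))

      rho, p = spearmanr([7.1, 6.5, 5.9, 8.2], [6.8, 6.1, 5.5, 7.9])  # hypothetical predicted vs measured affinities
      print("Spearman rho:", round(float(rho), 3))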

  13. Standards and Methodologies for Characterizing Radiobiological Impact of High-Z Nanoparticles

    PubMed Central

    Subiel, Anna; Ashmore, Reece; Schettino, Giuseppe

    2016-01-01

    Research on the application of high-Z nanoparticles (NPs) in cancer treatment and diagnosis has recently been the subject of growing interest, with much promise being shown with regards to a potential transition into clinical practice. In spite of numerous publications related to the development and application of nanoparticles for use with ionizing radiation, the literature is lacking coherent and systematic experimental approaches to fully evaluate the radiobiological effectiveness of NPs, validate mechanistic models and allow direct comparison of the studies undertaken by various research groups. The lack of standards and established methodology is commonly recognised as a major obstacle for the transition of innovative research ideas into clinical practice. This review provides a comprehensive overview of radiobiological techniques and quantification methods used in in vitro studies on high-Z nanoparticles and aims to provide recommendations for future standardization for NP-mediated radiation research. PMID:27446499

  14. A systems biology-based investigation into the therapeutic effects of Gansui Banxia Tang on reversing the imbalanced network of hepatocellular carcinoma

    NASA Astrophysics Data System (ADS)

    Zhang, Yanqiong; Guo, Xiaodong; Wang, Danhua; Li, Ruisheng; Li, Xiaojuan; Xu, Ying; Liu, Zhenli; Song, Zhiqian; Lin, Ya; Li, Zhiyan; Lin, Na

    2014-02-01

    Several complex molecular events are involved in tumorigenesis of hepatocellular carcinoma (HCC). The interactions of these molecules may constitute the HCC imbalanced network. Gansui Banxia Tang (GSBXT), as a classic Chinese herbal formula, is a popular complementary and alternative medicine modality for treating HCC. In order to investigate the therapeutic effects and the pharmacological mechanisms of GSBXT on reversing HCC imbalanced network, we in the current study developed a comprehensive systems approach of integrating disease-specific and drug-specific networks, and successfully revealed the relationships of the ingredients in GSBXT with their putative targets, and with HCC significant molecules and HCC related pathway systems for the first time. Meanwhile, further experimental validation also demonstrated the preventive effects of GSBXT on tumor growth in mice and its regulatory effects on potential targets.

  15. Determination of the oil distribution in a hermetic compressor using numerical simulation

    NASA Astrophysics Data System (ADS)

    Posch, S.; Hopfgartner, J.; Berger, E.; Zuber, B.; Almbauer, R.; Schöllauf, P.

    2017-08-01

    In addition to the reduction of friction, the oil in a hermetic compressor is very important for the transfer of heat from hot parts to the compressor shell. The simulation of the oil distribution in a hermetic reciprocating compressor for refrigeration application is shown in the present work. Using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent, the oil flow inside the compressor shell from the oil pump outlet to the oil sump is calculated. A comprehensive overview of the models used and the boundary conditions is given. After reaching steady-state conditions, the oil-covered surfaces are analysed with respect to heat transfer coefficients. The resulting heat transfer coefficients are used as input parameters for a thermal model of a hermetic compressor. An increase in accuracy of the thermal model with the simulated heat transfer coefficients compared to values from the literature is shown by model validation with experimental data.

  16. Integrated simulations for fusion research in the 2030's time frame (white paper outline)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, Alex; LoDestro, Lynda L.; Parker, Jeffrey B.

    This white paper presents the rationale for developing a community-wide capability for whole-device modeling, and advocates for an effort with the expectation of persistence: a long-term programmatic commitment, and support for community efforts. Statement of 2030 goal (two suggestions): (a) Robust integrated simulation tools to aid real-time experimental discharges and reactor designs by employing a hierarchy in fidelity of physics models. (b) To produce by the early 2030s a capability for validated, predictive simulation via integration of a suite of physics models from moderate through high fidelity, to understand and plan full plasma discharges, aid in data interpretation, carry out discovery science, and optimize future machine designs.

  17. The development of scientific thinking in elementary school: a comprehensive inventory.

    PubMed

    Koerber, Susanne; Mayer, Daniela; Osterhaus, Christopher; Schwippert, Knut; Sodian, Beate

    2015-01-01

    The development of scientific thinking was assessed in 1,581 second, third, and fourth graders (8-, 9-, 10-year-olds) based on a conceptual model that posits developmental progression from naïve to more advanced conceptions. Using a 66-item scale, five components of scientific thinking were addressed, including experimental design, data interpretation, and understanding the nature of science. Unidimensional and multidimensional item response theory analyses supported the instrument's reliability and validity and suggested that the multiple components of scientific thinking form a unitary construct, independent of verbal or reasoning skills. A partial credit model gave evidence for a hierarchical developmental progression. Across each grade transition, advanced conceptions increased while naïve conceptions decreased. Independent effects of intelligence, schooling, and parental education on scientific thinking are discussed. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.
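
    For context, a partial credit model of the general kind referred to above assigns category probabilities to each item from a person ability parameter and item-step difficulties. The sketch below implements that standard formulation with invented parameter values; it is not the scoring model fitted in the study.

      import numpy as np

      def pcm_probabilities(theta, deltas):
          """Category probabilities (score 0..m) of a partial credit item with step difficulties `deltas`."""
          cum = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
          expcum = np.exp(cum - cum.max())     # stabilised exponentiation of the cumulative sums
          return expcum / expcum.sum()

      # Hypothetical item with three steps, evaluated for a person of ability 0.5 logits.
      print(np.round(pcm_probabilities(theta=0.5, deltas=[-1.0, 0.0, 1.2]), 3))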

  18. Improving Microbial Genome Annotations in an Integrated Database Context

    PubMed Central

    Chen, I-Min A.; Markowitz, Victor M.; Chu, Ken; Anderson, Iain; Mavromatis, Konstantinos; Kyrpides, Nikos C.; Ivanova, Natalia N.

    2013-01-01

    Effective comparative analysis of microbial genomes requires a consistent and complete view of biological data. Consistency regards the biological coherence of annotations, while completeness regards the extent and coverage of functional characterization for genomes. We have developed tools that allow scientists to assess and improve the consistency and completeness of microbial genome annotations in the context of the Integrated Microbial Genomes (IMG) family of systems. All publicly available microbial genomes are characterized in IMG using different functional annotation and pathway resources, thus providing a comprehensive framework for identifying and resolving annotation discrepancies. A rule-based system for predicting phenotypes in IMG provides a powerful mechanism for validating functional annotations, whereby the phenotypic traits of an organism are inferred based on the presence of certain metabolic reactions and pathways and compared to experimentally observed phenotypes. The IMG family of systems is available at http://img.jgi.doe.gov/. PMID:23424620

  19. Snapshot Hyperspectral Volumetric Microscopy

    NASA Astrophysics Data System (ADS)

    Wu, Jiamin; Xiong, Bo; Lin, Xing; He, Jijun; Suo, Jinli; Dai, Qionghai

    2016-04-01

    The comprehensive analysis of biological specimens brings about the demand for capturing the spatial, temporal and spectral dimensions of visual information together. However, such high-dimensional video acquisition faces major challenges in developing large data throughput and effective multiplexing techniques. Here, we report the snapshot hyperspectral volumetric microscopy that computationally reconstructs hyperspectral profiles for high-resolution volumes of ~1000 μm × 1000 μm × 500 μm at video rate by a novel four-dimensional (4D) deconvolution algorithm. We validated the proposed approach with both numerical simulations for quantitative evaluation and various real experimental results on the prototype system. Different applications such as biological component analysis in bright field and spectral unmixing of multiple fluorescence are demonstrated. The experiments on moving fluorescent beads and GFP labelled drosophila larvae indicate the great potential of our method for observing multiple fluorescent markers in dynamic specimens.

  20. Improved Force Fields for Peptide Nucleic Acids with Optimized Backbone Torsion Parameters.

    PubMed

    Jasiński, Maciej; Feig, Michael; Trylska, Joanna

    2018-06-06

    Peptide nucleic acids are promising nucleic acid analogs for antisense therapies as they can form stable duplex and triplex structures with DNA and RNA. Computational studies of PNA-containing duplexes and triplexes are an important component for guiding their design, yet existing force fields have not been well validated and parametrized with modern computational capabilities. We present updated CHARMM and Amber force fields for PNA that greatly improve the stability of simulated PNA-containing duplexes and triplexes in comparison with experimental structures and allow such systems to be studied on microsecond time scales. The force field modifications focus on reparametrized PNA backbone torsion angles to match high-level quantum mechanics reference energies for a model compound. The microsecond simulations of PNA-PNA, PNA-DNA, PNA-RNA, and PNA-DNA-PNA complexes also allowed a comprehensive analysis of hydration and ion interactions with such systems.
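
    The reparametrization strategy described above, fitting backbone torsion terms against quantum-mechanical reference energies for a model compound, can be illustrated schematically as follows. The cosine-series form, the periodicities and the synthetic "QM" scan are assumptions made for the example and are not the published parameters.

      import numpy as np
      from scipy.optimize import curve_fit

      def torsion_energy(phi_deg, k1, k2, k3):
          """Cosine-series dihedral potential with periodicities 1-3 and zero phase offsets."""
          phi = np.radians(phi_deg)
          return (k1 * (1 + np.cos(phi)) +
                  k2 * (1 + np.cos(2 * phi)) +
                  k3 * (1 + np.cos(3 * phi)))

      phi_scan = np.arange(0.0, 360.0, 15.0)                     # hypothetical dihedral scan
      qm_energy = torsion_energy(phi_scan, 0.8, 0.3, 1.1) \
                  + np.random.default_rng(1).normal(scale=0.05, size=phi_scan.size)

      params, _ = curve_fit(torsion_energy, phi_scan, qm_energy)
      print("fitted force constants (kcal/mol):", np.round(params, 3))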

  1. The gravitational field and brain function.

    PubMed

    Mei, L; Zhou, C D; Lan, J Q; Wang, Z G; Wu, W C; Xue, X M

    1983-01-01

    The frontal cortex is recognized as the highest adaptive control center of the human brain. The principle of the "frontalization" of human brain function offers new possibilities for brain research in space. There is evolutionary and experimental evidence indicating the validity of the principle, including its role in the nervous response to gravitational stimulation. The gravitational field is considered here as one of the more constant and comprehensive factors acting on brain evolution, which has undergone some successive crucial steps: "encephalization", "corticalization", "lateralization" and "frontalization". The dominating effects of electrical responses from the frontal cortex have been discovered 1) in experiments under gravitational stimulus; and 2) in processes potentially relating to gravitational adaptation, such as memory and learning, sensory information processing, motor programming, and brain state control. A brain research experiment during space flight is suggested to test the role of the frontal cortex in space adaptation and its potential in brain control.

  2. A comprehensive study of radon levels and associated radiation doses in Himalayan groundwater

    NASA Astrophysics Data System (ADS)

    Prasad, Mukesh; Kumar, G. Anil; Sahoo, B. K.; Ramola, R. C.

    2018-03-01

    The concentration of radon in groundwater is mainly governed by the radium content in the rocks of the aquifer. The internal exposure to high levels of radon in water is directly associated with the radiological risk to members of public. In this work, radon concentrations were measured in groundwater of Garhwal Himalaya, India, using scintillation detector-based RnDuo and silicon detector-based RAD7 monitors. An inter-comparison exercise was carried out between RnDuo and RAD7 techniques for a few samples to validate the results. The radiation doses associated with the exposure to radon in water were estimated from measured values of activity concentrations. An attempt has been made to see the effect of geology, geohydrology and different types of sources on radon levels in Himalayan groundwater. The experimental techniques and results obtained are discussed in detail.

  3. High Frequency Amplitude Detector for GMI Magnetic Sensors

    PubMed Central

    Asfour, Aktham; Zidi, Manel; Yonnet, Jean-Paul

    2014-01-01

    A new concept of a high-frequency amplitude detector and demodulator for Giant-Magneto-Impedance (GMI) sensors is presented. This concept combines a half-wave rectifier, with outstanding capabilities and high speed, and a feedback approach that ensures the amplitude detection with easily adjustable gain. The developed detector is capable of measuring high-frequency and very low amplitude signals without the use of diode-based active rectifiers or analog multipliers. The performance of this detector is addressed throughout the paper. The full circuitry of the design is given, together with a comprehensive theoretical study of the concept and experimental validation. The detector has been used for the amplitude measurement of both single frequency and pulsed signals and for the demodulation of amplitude-modulated signals. It has also been successfully integrated in a GMI sensor prototype. Magnetic field and electrical current measurements with this sensor in open- and closed-loop configurations have also been conducted. PMID:25536003

  4. Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.

    This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. This data also provides an archival, benchmark dataset that can be used by the radiation detection community. The data reported here spans four data collection campaigns conducted between May 2015 and September 2016.

  5. Factor Structure of the Comprehensive Trail Making Test in Children and Adolescents with Brain Dysfunction

    ERIC Educational Resources Information Center

    Allen, Daniel N.; Thaler, Nicholas S.; Barchard, Kimberly A.; Vertinski, Mary; Mayfield, Joan

    2012-01-01

    The Comprehensive Trail Making Test (CTMT) is a relatively new version of the Trail Making Test that has a number of appealing features, including a large normative sample that allows raw scores to be converted to standard "T" scores adjusted for age. Preliminary validity information suggests that CTMT scores are sensitive to brain…

  6. Comprehensive Trail Making Test Performance in Children and Adolescents with Traumatic Brain Injury

    ERIC Educational Resources Information Center

    Allen, Daniel N.; Thaler, Nicholas S.; Ringdahl, Erik N.; Barney, Sally J.; Mayfield, Joan

    2012-01-01

    The sensitivity of the Trail Making Test to brain damage has been well-established over many years, making it one of the most commonly used tests in clinical neuropsychological evaluations. The current study examined the validity of scores from a newer version of the Trail Making Test, the Comprehensive Trail Making Test (CTMT), in children and…

  7. The Differences across Distributed Leadership Practices by School Position According to the Comprehensive Assessment of Leadership for Learning (CALL)

    ERIC Educational Resources Information Center

    Blitz, Mark H.; Modeste, Marsha

    2015-01-01

    The Comprehensive Assessment of Leadership for Learning (CALL) is a multi-source assessment of distributed instructional leadership. As part of the validation of CALL, researchers examined differences between teacher and leader ratings in assessing distributed leadership practices. The authors utilized a t-test for equality of means for the…

  8. An Internal Construct Validation Study of the "Iowa Tests of Basic Skills" (Level 12, Form G) Reading Comprehension Test Items.

    ERIC Educational Resources Information Center

    Perkins, Kyle; Duncan, Ann

    An assessment analysis was performed to determine whether sets of items designed to measure three different subskills of reading comprehension of the Iowa Tests of Basic Skills (ITBSs) did, in fact, distinguish among these subskills. The three major skills objectives were: (1) facts; (2) generalizations; and (3) inferences. Data from…

  9. Evaluating the Effectiveness of a State-Mandated Benchmark Reading Assessment: mClass Reading 3D (Text Reading and Comprehension)

    ERIC Educational Resources Information Center

    Snow, Amie B.; Morris, Darrell; Perney, Jan

    2018-01-01

    We examined which of two instruments (Text Reading and Comprehension inventory [TRC] or a traditional informal reading inventory [IRI]) provides the more valid assessment of a primary-grade student's reading instructional level. The TRC is currently the required, benchmark reading assessment for students in grades K-3 in the state of North…

  10. Insights into the mechanism of X-ray-induced disulfide-bond cleavage in lysozyme crystals based on EPR, optical absorption and X-ray diffraction studies.

    PubMed

    Sutton, Kristin A; Black, Paul J; Mercer, Kermit R; Garman, Elspeth F; Owen, Robin L; Snell, Edward H; Bernhard, William A

    2013-12-01

    Electron paramagnetic resonance (EPR) and online UV-visible absorption microspectrophotometry with X-ray crystallography have been used in a complementary manner to follow X-ray-induced disulfide-bond cleavage. Online UV-visible spectroscopy showed that upon X-irradiation, disulfide radicalization appeared to saturate at an absorbed dose of approximately 0.5-0.8 MGy, in contrast to the saturating dose of ∼0.2 MGy observed using EPR at much lower dose rates. The observations suggest that a multi-track model involving product formation owing to the interaction of two separate tracks is a valid model for radiation damage in protein crystals. The saturation levels are remarkably consistent given the widely different experimental parameters and the range of total absorbed doses studied. The results indicate that even at the lowest doses used for structural investigations disulfide bonds are already radicalized. Multi-track considerations offer the first step in a comprehensive model of radiation damage that could potentially lead to a combined computational and experimental approach to identifying when damage is likely to be present, to quantitate it and to provide the ability to recover the native unperturbed structure.
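
    One simple way to quantify the kind of dose saturation reported above is to fit a saturating exponential to a signal-versus-dose curve; the sketch below does exactly that with invented numbers and is only an illustration, not the multi-track model itself.

      import numpy as np
      from scipy.optimize import curve_fit

      dose_MGy = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
      signal = np.array([0.18, 0.33, 0.55, 0.80, 0.90, 0.95, 0.97])   # hypothetical normalised radical signal

      def saturating(D, A, D0):
          return A * (1.0 - np.exp(-D / D0))

      (A, D0), _ = curve_fit(saturating, dose_MGy, signal, p0=(1.0, 0.3))
      print(f"saturation level ~{A:.2f}, characteristic dose ~{D0:.2f} MGy")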

  11. Combined Loads Test Fixture for Thermal-Structural Testing Aerospace Vehicle Panel Concepts

    NASA Technical Reports Server (NTRS)

    Fields, Roger A.; Richards, W. Lance; DeAngelis, Michael V.

    2004-01-01

    A structural test requirement of the National Aero-Space Plane (NASP) program has resulted in the design, fabrication, and implementation of a combined loads test fixture. Principal requirements for the fixture are testing a 4- by 4-ft hat-stiffened panel with combined axial (either tension or compression) and shear load at temperatures ranging from room temperature to 915 F, keeping the test panel stresses caused by the mechanical loads uniform, and thermal stresses caused by non-uniform panel temperatures minimized. The panel represents the side fuselage skin of an experimental aerospace vehicle, and was produced for the NASP program. A comprehensive mechanical loads test program using the new test fixture has been conducted on this panel from room temperature to 500 F. Measured data have been compared with finite-element analyses predictions, verifying that uniform load distributions were achieved by the fixture. The overall correlation of test data with analysis is excellent. The panel stress distributions and temperature distributions are very uniform and fulfill program requirements. This report provides details of an analytical and experimental validation of the combined loads test fixture. Because of its simple design, this unique test fixture can accommodate panels from a variety of aerospace vehicle designs.

  12. A framework program for the teaching of alternative methods (replacement, reduction, refinement) to animal experimentation.

    PubMed

    Daneshian, Mardas; Akbarsha, Mohammad A; Blaauboer, Bas; Caloni, Francesca; Cosson, Pierre; Curren, Rodger; Goldberg, Alan; Gruber, Franz; Ohl, Frauke; Pfaller, Walter; van der Valk, Jan; Vinardell, Pilar; Zurlo, Joanne; Hartung, Thomas; Leist, Marcel

    2011-01-01

    Development of improved communication and education strategies is important to make alternatives to the use of animals, and the broad range of applications of the 3Rs concept better known and understood by different audiences. For this purpose, the Center for Alternatives to Animal Testing in Europe (CAAT-Europe) together with the Transatlantic Think Tank for Toxicology (t(4)) hosted a three-day workshop on "Teaching Alternative Methods to Animal Experimentation". A compilation of the recommendations by a group of international specialists in the field is summarized in this report. Initially, the workshop participants identified the different audience groups to be addressed and also the communication media that may be used. The main outcome of the workshop was a framework for a comprehensive educational program. The modular structure of the teaching program presented here allows adaptation to different audiences with their specific needs; different time schedules can be easily accommodated on this basis. The topics cover the 3Rs principle, basic research, toxicological applications, method development and validation, regulatory aspects, case studies and ethical aspects of 3Rs approaches. This expert consortium agreed to generating teaching materials covering all modules and providing them in an open access online repository.

  13. Numerical analysis of stress effects on Frank loop evolution during irradiation in austenitic Fe-Cr-Ni alloy

    NASA Astrophysics Data System (ADS)

    Tanigawa, Hiroyasu; Katoh, Yutai; Kohyama, Akira

    1995-08-01

    Effects of applied stress on early stages of interstitial-type Frank loop evolution were investigated by both numerical calculation and irradiation experiments. The final objective of this research is to propose a comprehensive model of complex stress effects on microstructural evolution under various conditions. In the experimental part of this work, the microstructural analysis revealed that the differences in resolved normal stress caused differences in the nucleation rates of Frank loops on {111} crystallographic family planes, and that with increasing external applied stress the total nucleation rate of Frank loops was increased. A numerical calculation was carried out primarily to evaluate the validity of models of stress effects on nucleation processes of Frank loop evolution. The calculation is based on rate equations which describe the evolution of point defects, small point defect clusters and Frank loops. The rate equations of Frank loop evolution were formulated for {111} planes, considering the effects of resolved normal stress on the clustering processes of small point defects and the growth processes of Frank loops separately. The experimental results and the predictions from the numerical calculation were in good qualitative agreement.
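
    As a minimal illustration of the rate-equation approach referred to above, the sketch below integrates a generic mean-field balance for interstitials and vacancies (production, mutual recombination, loss to sinks). All coefficients are invented, and the authors' model additionally resolves Frank loop nucleation and growth on the four {111} plane families together with their resolved-normal-stress bias.

      import numpy as np
      from scipy.integrate import solve_ivp

      G = 1e-6                   # defect production rate, hypothetical units
      R_iv = 1e2                 # interstitial-vacancy recombination coefficient, hypothetical
      k_i, k_v = 1e-4, 1e-5      # lumped sink-loss coefficients, hypothetical

      def rates(t, c):
          c_i, c_v = c
          recomb = R_iv * c_i * c_v
          return [G - recomb - k_i * c_i,        # interstitial balance
                  G - recomb - k_v * c_v]        # vacancy balance

      sol = solve_ivp(rates, (0.0, 1e4), [0.0, 0.0], rtol=1e-8, atol=1e-12)
      print("late-time concentrations (c_i, c_v):", sol.y[:, -1])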

  14. Study on turbulent flow and heat transfer performance of tubes with internal fins in EGR cooler

    NASA Astrophysics Data System (ADS)

    Liu, Lin; Ling, Xiang; Peng, Hao

    2015-07-01

    In this paper, flow and heat transfer performances of tubes with internal longitudinal fins in an Exhaust Gas Recirculation (EGR) cooler were investigated by three-dimensional computation and experiment. Each test tube was a single-pipe structure, without an inner tube. Three-dimensional computation was performed to determine the difference in thermal characteristics between the two kinds of tubes, that is, the tube with an inner solid staff as a blocked structure and the tube without the blocked structure. The effects of fin width and fin height on heat transfer and flow are examined. To prove the validity of the numerical method, the calculated results were compared with corresponding experimental data. The tube-side friction factor and heat transfer coefficient were examined. As a result, the maximum deviations between the numerical results and the experimental data are approximately 5.4% for the friction factor and 8.6% for the heat transfer coefficient, respectively. It is found that both types of internally finned tubes significantly enhance heat transfer. The heat transfer of the tube with the blocked structure is better, while the pressure drop of the tube without the blocked structure is lower. The comprehensive performance of the unblocked tube makes it better suited for application in the EGR cooler.

  15. The Paucity Problem: Where Have All the Space Reactor Experiments Gone?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.

    2016-10-01

    The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential in the validation of nuclear data, neutronics codes, and modeling of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) are of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem, such that the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, as well as constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data. See full abstract in attached document.

  16. The effect of acoustically levitated objects on the dynamics of ultrasonic actuators

    NASA Astrophysics Data System (ADS)

    Ilssar, D.; Bucher, I.

    2017-03-01

    This paper presents a comprehensive model, coupling a piezoelectric actuator operating at ultrasonic frequencies to a near-field acoustically levitated object through a compressible thin layer of gas such that the combined dynamic response of the system can be predicted. The latter is derived by introducing a simplified model of the nonlinear squeezed layer of gas and a variational model of the solid structure and the piezoelectric elements. Since the harmonic forces applied by the entrapped fluid depend on the levitated object's height and vertical motion, the latter affects the impedance of the driving surface, affecting the natural frequencies, damping ratios, and amplification of the actuator. Thus, the developed model is helpful when devising a resonance tracking algorithm aimed to excite a near-field acoustic levitation based apparatus optimally. Validation of the suggested model was carried out using a focused experimental setup geared to eliminate the effects that were already verified in the past. In agreement with the model, the experimental results showed that the natural frequency and damping ratio of a designated mode decrease monotonically with the levitated object's average height, whereas the amplification of the mode increases with the levitation height.

  17. Effect of Melt Convection and Solid Transport on Macrosegregation and Grain Structure in Equiaxed Al-Cu Alloys

    NASA Technical Reports Server (NTRS)

    Rerko, Rodney S.; deGroh, Henry C., III; Beckermann, Christoph; Gray, Hugh R. (Technical Monitor)

    2002-01-01

    Macrosegregation in metal casting can be caused by thermal and solutal melt convection, and the transport of unattached solid crystals. These free grains can be a result of, for example, nucleation in the bulk liquid or dendrite fragmentation. In an effort to develop a comprehensive numerical model for the casting of alloys, an experimental study has been conducted to generate benchmark data with which such a solidification model could be tested. The specific goal of the experiments was to examine equiaxed solidification in situations where sinking of grains is (and is not) expected. The objectives were: 1) experimentally study the effects of solid transport and thermosolutal convection on macrosegregation and grain size distribution patterns; and 2) provide a complete set of controlled thermal boundary conditions, temperature data, segregation data, and grain size data, to validate numerical codes. The alloys used were Al-1 wt. pct. Cu, and Al-10 wt. pct. Cu with various amounts of the grain refiner TiB2 added. Cylindrical samples were either cooled from the top, or the bottom. Several trends in the data stand out. In attempting to model these experiments, concentrating on experiments that show clear trends or differences is recommended.

  18. Aeroelastic loads and stability investigation of a full-scale hingeless rotor

    NASA Technical Reports Server (NTRS)

    Peterson, Randall L.; Johnson, Wayne

    1991-01-01

    An analytical investigation was conducted to study the influence of various parameters on predicting the aeroelastic loads and stability of a full-scale hingeless rotor in hover and forward flight. The CAMRAD/JA (Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics, Johnson Aeronautics) analysis code is used to obtain the analytical predictions. Data are presented for rotor blade bending and torsional moments as well as inplane damping data obtained for rotor operation in hover at a constant rotor rotational speed of 425 rpm and thrust coefficients between 0.0 and 0.12. Experimental data are presented from a test in the wind tunnel. Validation of the rotor system structural model with experimental rotor blade loads data shows excellent correlation with analytical results. Using this analysis, the influence of different aerodynamic inflow models, the number of generalized blade and body degrees of freedom, and the control-system stiffness on predicted stability levels is shown. Forward flight predictions of the BO-105 rotor system for 1-G thrust conditions at advance ratios of 0.0 to 0.35 are presented. The influence of different aerodynamic inflow models, dynamic inflow models and shaft angle variations on predicted stability levels is shown as a function of advance ratio.

  19. The comprehensive care project: measuring physician performance in ambulatory practice.

    PubMed

    Holmboe, Eric S; Weng, Weifeng; Arnold, Gerald K; Kaplan, Sherrie H; Normand, Sharon-Lise; Greenfield, Sheldon; Hood, Sarah; Lipner, Rebecca S

    2010-12-01

    To investigate the feasibility, reliability, and validity of comprehensively assessing physician-level performance in ambulatory practice. Ambulatory-based general internists in 13 states participated in the assessment. We assessed physician-level performance, adjusted for patient factors, on 46 individual measures, an overall composite measure, and composite measures for chronic, acute, and preventive care. Between- versus within-physician variation was quantified by intraclass correlation coefficients (ICC). External validity was assessed by correlating performance on a certification exam. Medical records for 236 physicians were audited for seven chronic and four acute care conditions, and six age- and gender-appropriate preventive services. Performance on the individual and composite measures varied substantially within (range 5-86 percent compliance on 46 measures) and between physicians (ICC range 0.12-0.88). Reliabilities for the composite measures were robust: 0.88 for chronic care and 0.87 for preventive services. Higher certification exam scores were associated with better performance on the overall (r = 0.19; p<.01), chronic care (r = 0.14, p = .04), and preventive services composites (r = 0.17, p = .01). Our results suggest that reliable and valid comprehensive assessment of the quality of chronic and preventive care can be achieved by creating composite measures and by sampling feasible numbers of patients for each condition. © Health Research and Educational Trust.
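
    The between- versus within-physician comparison above rests on intraclass correlation coefficients. A minimal one-way random-effects ICC computation, using an invented physician-by-patient compliance matrix purely for illustration, looks like this:

      import numpy as np

      # Rows = physicians, columns = sampled patients; values = % compliance (hypothetical).
      scores = np.array([[80, 75, 82, 78],
                         [60, 65, 58, 62],
                         [90, 88, 93, 91]], dtype=float)

      n, k = scores.shape
      grand_mean = scores.mean()
      ms_between = k * np.sum((scores.mean(axis=1) - grand_mean) ** 2) / (n - 1)
      ms_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
      icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
      print("ICC(1):", round(icc1, 3))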

  20. Development, Sensibility, and Validity of a Systemic Autoimmune Rheumatic Disease Case Ascertainment Tool.

    PubMed

    Armstrong, Susan M; Wither, Joan E; Borowoy, Alan M; Landolt-Marticorena, Carolina; Davis, Aileen M; Johnson, Sindhu R

    2017-01-01

    Case ascertainment through self-report is a convenient but often inaccurate method to collect information. The purposes of this study were to develop, assess the sensibility, and validate a tool to identify cases of systemic autoimmune rheumatic diseases (SARD) in the outpatient setting. The SARD tool was administered to subjects sampled from specialty clinics. Determinants of sensibility - comprehensibility, feasibility, validity, and acceptability - were evaluated using a numeric rating scale from 1-7. Comprehensibility was evaluated using the Flesch Reading Ease and the Flesch-Kincaid Grade Level. Self-reported diagnoses were validated against medical records using Cohen's κ statistic. There were 141 participants [systemic lupus erythematosus (SLE), systemic sclerosis (SSc), rheumatoid arthritis, Sjögren syndrome (SS), inflammatory myositis (polymyositis/dermatomyositis; PM/DM), and controls] who completed the questionnaire. The Flesch Reading Ease score was 77.1 and the Flesch-Kincaid Grade Level was 4.4. Respondents endorsed (mean ± SD) comprehensibility (6.12 ± 0.92), feasibility (5.94 ± 0.81), validity (5.35 ± 1.10), and acceptability (3.10 ± 2.03). The SARD tool had a sensitivity of 0.91 (95% CI 0.88-0.94) and a specificity of 0.99 (95% CI 0.96-1.00). The agreement between the SARD tool and medical record was κ = 0.82 (95% CI 0.77-0.88). Subgroup analysis by SARD found κ coefficients for SLE to be κ = 0.88 (95% CI 0.79-0.97), SSc κ = 1.0 (95% CI 1.0-1.0), PM/DM κ = 0.72 (95% CI 0.49-0.95), and SS κ = 0.85 (95% CI 0.71-0.99). The screening questions had sensitivity ranging from 0.96 to 1.0 and specificity ranging from 0.88 to 1.0. This SARD case ascertainment tool has demonstrable sensibility and validity. The use of both screening and confirmatory questions confers added accuracy.
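
    The agreement and accuracy statistics reported above (Cohen's kappa, sensitivity, specificity) can be computed from paired self-report and medical-record labels as sketched below; the short label vectors are invented solely to make the example runnable.

      from sklearn.metrics import cohen_kappa_score, confusion_matrix

      record = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]       # 1 = SARD according to the medical record (hypothetical)
      self_report = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]  # 1 = SARD according to the questionnaire (hypothetical)

      kappa = cohen_kappa_score(record, self_report)
      tn, fp, fn, tp = confusion_matrix(record, self_report).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"kappa={kappa:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")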

  1. The Outpatient Experience Questionnaire of comprehensive public hospital in China: development, validity and reliability.

    PubMed

    Hu, Yinhuan; Zhang, Zixia; Xie, Jinzhu; Wang, Guanping

    2017-02-01

    The objective of this study is to describe the development of the Outpatient Experience Questionnaire (OPEQ) and to assess the validity and reliability of the scale. Literature review, patient interviews, Delphi method and Cross-sectional validation survey. Six comprehensive public hospitals in China. The survey was carried out on a sample of 600 outpatients. Acceptability of the questionnaire was assessed according to the overall response rate, item non-response rate and the average completion time. Correlation coefficients and confirmatory factor analysis were used to test construct validity. Delphi method was used to assess the content validity of the questionnaire. Cronbach's coefficient alpha and split-half reliability coefficient were used to estimate the internal reliability of the questionnaire. The overall response rate was 97.2% and the item non-response rate ranged from 0% to 0.3%. The mean completion time was 6 min. The Spearman correlations of item-total score ranged from 0.466 to 0.765. The results of confirmatory factor analysis showed that all items had factor loadings above 0.40 and the dimension intercorrelation ranged from 0.449 to 0.773, the goodness of fit of the questionnaire was reasonable. The overall authority grade of expert consultation was 0.80 and Kendall's coefficient of concordance W was 0.186. The Cronbach's coefficients alpha of six dimensions ranged from 0.708 to 0.895, the split-half reliability coefficient (Spearman-Brown coefficient) was 0.969. The OPEQ is a promising instrument covering the most important aspects which influence outpatient experiences of comprehensive public hospital in China. It has good evidence for acceptability, validity and reliability. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
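
    Cronbach's alpha and the Spearman-Brown split-half coefficient cited above are straightforward to compute from an item-score matrix; the sketch below uses an invented respondent-by-item matrix purely for illustration.

      import numpy as np

      # Rows = respondents, columns = questionnaire items (hypothetical 5-point ratings).
      items = np.array([[4, 5, 4, 3, 5, 4],
                        [2, 3, 2, 2, 3, 3],
                        [5, 5, 4, 5, 5, 4],
                        [3, 3, 3, 2, 4, 3]], dtype=float)

      k = items.shape[1]
      alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

      half1 = items[:, ::2].sum(axis=1)       # odd-numbered items
      half2 = items[:, 1::2].sum(axis=1)      # even-numbered items
      r = np.corrcoef(half1, half2)[0, 1]
      split_half = 2 * r / (1 + r)            # Spearman-Brown correction of the half-test correlation
      print(f"alpha={alpha:.3f}  split-half={split_half:.3f}")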

  2. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    PubMed Central

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
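
    As a rough software analogue of the per-message overhead analysed above, the sketch below computes, verifies and times an HMAC-SHA256 tag with Python's standard library; the key size, payload size and iteration count are illustrative choices, not the paper's experimental setup.

      import hashlib, hmac, os, time

      key = os.urandom(32)
      payload = os.urandom(64)                  # stand-in for a time-triggered message payload

      tag = hmac.new(key, payload, hashlib.sha256).digest()
      assert hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())

      t0 = time.perf_counter()
      for _ in range(10000):
          hmac.new(key, payload, hashlib.sha256).digest()
      print(f"mean HMAC-SHA256 time: {(time.perf_counter() - t0) / 10000 * 1e6:.1f} microseconds")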

  3. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    PubMed

    Hitz, Benjamin C; Rowe, Laurence D; Podduturi, Nikhil R; Glick, David I; Baymuradov, Ulugbek K; Malladi, Venkat S; Chan, Esther T; Davidson, Jean M; Gabdank, Idan; Narayana, Aditi K; Onate, Kathrina C; Hilton, Jason; Ho, Marcus C; Lee, Brian T; Miyasato, Stuart R; Dreszer, Timothy R; Sloan, Cricket A; Strattan, J Seth; Tanaka, Forrest Y; Hong, Eurie L; Cherry, J Michael

    2017-01-01

    The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package.
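
    A typical metadata query against the portal built on encodeD goes through its JSON search interface; the sketch below is an assumed example of such a query, and the exact parameters and returned field names should be checked against the ENCODE REST documentation.

      import requests

      url = "https://www.encodeproject.org/search/"
      params = {"type": "Experiment", "assay_title": "ChIP-seq", "format": "json", "limit": 5}
      response = requests.get(url, params=params, headers={"Accept": "application/json"})
      response.raise_for_status()

      for record in response.json().get("@graph", []):
          print(record.get("accession"), record.get("assay_title"))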

  4. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata

    PubMed Central

    Podduturi, Nikhil R.; Glick, David I.; Baymuradov, Ulugbek K.; Malladi, Venkat S.; Chan, Esther T.; Davidson, Jean M.; Gabdank, Idan; Narayana, Aditi K.; Onate, Kathrina C.; Hilton, Jason; Ho, Marcus C.; Lee, Brian T.; Miyasato, Stuart R.; Dreszer, Timothy R.; Sloan, Cricket A.; Strattan, J. Seth; Tanaka, Forrest Y.; Hong, Eurie L.; Cherry, J. Michael

    2017-01-01

    The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package. PMID:28403240

  5. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems.

    PubMed

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D

    2016-07-25

    In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, networked control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, so to enable secure communication on the network it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication Code (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental results for both wired and wireless platforms, and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems.
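
    As a concrete illustration of the mechanism being evaluated (not the authors' kernel-level implementation), the sketch below tags a fixed-length time-triggered payload with a truncated HMAC and verifies it on receipt, using only Python's standard library; the key, payload layout, and 8-byte truncation are hypothetical choices.

```python
# Hedged sketch: appending a truncated HMAC tag to a time-triggered message
# payload and verifying it on receipt. Standard library only; the key, payload
# layout, and 8-byte truncation are illustrative, not the paper's parameters.
import hmac
import hashlib

SECRET_KEY = b"demo-shared-key"   # hypothetical pre-shared key
TAG_LEN = 8                       # truncated tag length in bytes

def protect(payload: bytes) -> bytes:
    """Return payload || truncated HMAC-SHA256 tag."""
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def verify(frame: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    return hmac.compare_digest(tag, expected)

frame = protect(b"\x01\x02\x03\x04\x05\x06\x07\x08")  # 8-byte TT payload
print(len(frame), verify(frame))                      # 16 True
```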

  6. Characterization of Unsteady Flow Structures Near Leading-Edge Slat. Part 1; PIV Measurements

    NASA Technical Reports Server (NTRS)

    Jenkins, Luther N.; Khorrami, Mehdi R.; Choudhari, Meelan

    2004-01-01

    A comprehensive computational and experimental study has been performed at the NASA Langley Research Center as part of the Quiet Aircraft Technology (QAT) Program to investigate the unsteady flow near a leading-edge slat of a two-dimensional, high-lift system. This paper focuses on the experimental effort conducted in the NASA Langley Basic Aerodynamics Research Tunnel (BART), where Particle Image Velocimetry (PIV) data were acquired in the slat cove and at the slat trailing edge of a three-element, high-lift model at 4, 6, and 8 degrees angle of attack and a freestream Mach number of 0.17. Instantaneous velocities extracted from the PIV images are used to compute the mean and fluctuating components of velocity and vorticity. The data show the recirculation in the cove, reattachment of the shear layer on the slat lower surface, and discrete vortical structures within the shear layer emanating from the slat cusp and slat trailing edge. Detailed measurements are used to examine the shear layer formation at the slat cusp, vortex shedding at the slat trailing edge, and convection of vortical structures through the slat gap. Selected results are discussed and compared with unsteady, Reynolds-Averaged Navier-Stokes (URANS) computations for the same configuration in a companion paper by Khorrami, Choudhari, and Jenkins (2004). The experimental dataset provides essential flow-field information for the validation of near-field inputs to noise prediction tools.
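
    The mean and fluctuating components referred to above follow from a Reynolds decomposition of the instantaneous fields, u = U + u'. The sketch below shows that post-processing step in numpy, with a random array standing in for the stack of measured PIV vector fields.

```python
# Hedged sketch of the PIV post-processing step described above: ensemble-average
# a stack of instantaneous velocity fields to get the mean, then subtract it to
# obtain the fluctuations and their RMS. The random array stands in for measured
# vector fields.
import numpy as np

rng = np.random.default_rng(0)
u_inst = rng.normal(loc=50.0, scale=3.0, size=(200, 64, 64))  # (snapshots, ny, nx)

u_mean = u_inst.mean(axis=0)                   # ensemble average, U
u_fluct = u_inst - u_mean                      # fluctuating component, u' = u - U
u_rms = np.sqrt((u_fluct ** 2).mean(axis=0))   # RMS of the fluctuations

print(u_mean.shape, float(u_rms.mean()))
```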

  7. Using a gel/plastic surrogate to study the biomechanical response of the head under air shock loading: a combined experimental and numerical investigation.

    PubMed

    Zhu, Feng; Wagner, Christina; Dal Cengio Leonardi, Alessandra; Jin, Xin; Vandevord, Pamela; Chou, Clifford; Yang, King H; King, Albert I

    2012-03-01

    A combined experimental and numerical study was conducted to establish a method for elucidating the biomechanical response of a head surrogate physical model under air shock loading. In the physical experiments, a gel-filled, egg-shaped skull/brain surrogate was exposed to blast overpressure in a shock tube environment, and static pressures within the shock tube and the surrogate were recorded throughout the event. A numerical model of the shock tube was developed using the Eulerian approach and validated against experimental data. An arbitrary Lagrangian-Eulerian (ALE) fluid-structure coupling algorithm was then utilized to simulate the interaction of the shock wave and the head surrogate. After model validation, a comprehensive series of parametric studies was carried out on the egg-shaped surrogate FE model to assess the effect of several key factors, such as the elastic modulus of the shell, bulk modulus of the core, head orientation, and internal sensor location, on pressure and strain responses. Results indicate that increasing the elastic modulus of the shell within the range simulated in this study led to a considerable rise in the overpressures. When the bulk modulus of the core was varied from 0.5 to 2.0 GPa, the overpressure increased by 7.2%. The curvature of the surface facing the shock wave significantly affected both the peak positive and negative pressures. Simulations of the head surrogate with the blunt end facing the advancing shock front produced higher pressures than simulations with the pointed end facing the shock front. The influence of an opening (possibly mimicking anatomical apertures) on the peak pressures was evaluated using a surrogate head with a hole in the shell at the blunt end. It was revealed that the presence of the opening had little influence on the positive pressures but had a noticeable effect on the negative pressures.

  8. Bend-Twist Coupled Carbon-Fiber Laminate Beams: Fundamental Behavior and Applications

    NASA Astrophysics Data System (ADS)

    Babuska, Pavel

    Material-induced bend-twist coupling in laminated composite beams has seen applications in engineered structures for decades, ranging from airplane wings to turbine blades. Symmetric, unbalanced carbon-fiber laminates that exhibit bend-twist coupling can be difficult to characterize and exhibit unintuitive deformation states that may pose challenges to the engineer. In this thesis, bend-twist coupled beams are investigated comprehensively by experimentation, numerical modeling, and analytical methods. Beams of varying fiber angle and degree of coupling were manufactured and physically tested in both linear and nonlinear static and dynamic settings. Analytical mass and stiffness matrices were derived for the development of a beam element for use in the stiffness matrix analysis method. Additionally, an ABAQUS finite element model was used in conjunction with the analytical methods to predict and further characterize the behavior of the beams. The three regimes, experimental, analytical, and numerical, represent a full-field characterization of bend-twist coupling in composite beams. A notable application of bend-twist coupled composites is passively adaptive turbine blades, whereby the deformation coupling can be built into the blade structure so that it simultaneously bends and twists, thus pitching the blade into or away from the fluid flow and changing the blade angle of attack. Passive pitch adaptation has been implemented successfully in wind turbine blades; however, for marine turbine blades the technology is still in the development phase. Bend-twist coupling has been shown numerically to be beneficial to tidal turbine performance; however, little validation has been conducted in the experimental regime. In this thesis, passively adaptive, experiment-scale tidal turbine blades were designed, analyzed, manufactured, and physically tested, validating the foundational numerical work. It was shown that blade forces and root moments, as well as turbine thrust and power coefficients, can be manipulated by the inclusion of passive pitch adaptation through bend-twist coupling.
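
    For readers unfamiliar with the coupling, the sectional constitutive relation of a bend-twist coupled beam can be written with an off-diagonal term linking bending and torsion, so that a pure bending moment induces twist. The sketch below illustrates this with made-up section properties; it is not the stiffness matrix derived in the thesis.

```python
# Hedged sketch of material-induced bend-twist coupling at the cross-section
# level: [M, T] = K @ [kappa, phi'], where the off-diagonal term g couples
# bending and torsion. Section properties are made up for illustration only.
import numpy as np

EI = 1.2e3   # bending stiffness, N*m^2 (illustrative)
GJ = 0.8e3   # torsional stiffness, N*m^2 (illustrative)
g = 0.3e3    # bend-twist coupling term, N*m^2 (zero for a balanced layup)

K = np.array([[EI, g],
              [g, GJ]])

loads = np.array([10.0, 0.0])                 # pure bending moment, no torque
kappa, twist_rate = np.linalg.solve(K, loads)  # coupling makes twist_rate nonzero
print(f"curvature = {kappa:.4e} 1/m, induced twist rate = {twist_rate:.4e} rad/m")
```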

  9. 75 FR 53371 - Liquefied Natural Gas Facilities: Obtaining Approval of Alternative Vapor-Gas Dispersion Models

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...

  10. Rapid Fine Conformational Epitope Mapping Using Comprehensive Mutagenesis and Deep Sequencing*

    PubMed Central

    Kowalsky, Caitlin A.; Faber, Matthew S.; Nath, Aritro; Dann, Hailey E.; Kelly, Vince W.; Liu, Li; Shanker, Purva; Wagner, Ellen K.; Maynard, Jennifer A.; Chan, Christina; Whitehead, Timothy A.

    2015-01-01

    Knowledge of the fine location of neutralizing and non-neutralizing epitopes on human pathogens affords a better understanding of the structural basis of antibody efficacy, which will expedite rational design of vaccines, prophylactics, and therapeutics. However, full utilization of the wealth of information from single-cell techniques and antibody repertoire sequencing awaits the development of a high-throughput, inexpensive method to map the conformational epitopes of antibody-antigen interactions. Here we show such an approach, which combines comprehensive mutagenesis, cell surface display, and DNA deep sequencing. We develop analytical equations to identify epitope positions and demonstrate the method's effectiveness by mapping the fine epitope for different antibodies targeting TNF, pertussis toxin, and the cancer target TROP2. In all three cases, the experimentally determined conformational epitope was consistent with previous experimental datasets, confirming the reliability of the experimental pipeline. Once the comprehensive library is generated, fine conformational epitope maps can be prepared at a rate of four per day. PMID:26296891
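
    A much-simplified stand-in for the epitope-calling step is to compare per-position mutant frequencies in the antibody-selected population against the unselected library and flag strongly depleted positions as candidate epitope contacts. The sketch below uses invented counts and a hypothetical log2-enrichment cutoff; the authors' actual analytical equations differ.

```python
# Hedged, simplified stand-in for the epitope-calling step: compare per-position
# read counts in an antibody-selected population with the unselected library and
# flag positions whose mutants are strongly depleted (candidate epitope contacts).
# Counts and the -2.0 cutoff are invented for illustration.
import math

unselected = {101: 480, 102: 510, 103: 495, 104: 502, 105: 488}  # reads per position
selected   = {101: 470, 102:  60, 103: 490, 104:  45, 105: 480}

def log2_enrichment(sel: int, uns: int, pseudo: float = 1.0) -> float:
    return math.log2((sel + pseudo) / (uns + pseudo))

for pos in sorted(unselected):
    e = log2_enrichment(selected[pos], unselected[pos])
    flag = "epitope candidate" if e < -2.0 else ""
    print(f"position {pos}: log2 enrichment = {e:+.2f} {flag}")
```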

  11. Flow in prosthetic heart valves: state-of-the-art and future directions.

    PubMed

    Yoganathan, Ajit P; Chandran, K B; Sotiropoulos, Fotis

    2005-12-01

    Since the first successful implantation of a prosthetic heart valve four decades ago, over 50 different designs have been developed, including both mechanical and bioprosthetic valves. Today, the most widely implanted design is the mechanical bileaflet, with over 170,000 implants worldwide each year. Several different mechanical valves are currently available and many of them have good bulk forward flow hemodynamics, with lower transvalvular pressure drops, larger effective orifice areas, and fewer regions of forward flow stasis than their earlier-generation counterparts such as the ball-and-cage and tilting-disc valves. However, mechanical valve implants suffer from complications resulting from thrombus deposition, and patients implanted with these valves need to be under long-term anti-coagulant therapy. In general, blood thinners are not needed with bioprosthetic implants, but tissue valves suffer from structural failure, with an average lifetime of 10-12 years before replacement is needed. Flow-induced stresses on the formed elements in blood have been implicated in thrombus initiation within mechanical valve prostheses. Regions of stress concentration during the complex motion of the leaflets have been implicated in the structural failure of bioprosthetic valve leaflets. In vivo and in vitro experimental studies have yielded valuable information on the relationship between hemodynamic stresses and the problems associated with the implants. More recently, Computational Fluid Dynamics (CFD) has emerged as a promising tool, which, alongside experimentation, can yield insights of unprecedented detail into the hemodynamics of prosthetic heart valves. For CFD to realize its full potential, however, it must rely on numerical techniques that can handle the enormous geometrical complexities of prosthetic devices with spatial and temporal resolution sufficiently high to accurately capture all hemodynamically relevant scales of motion. Such algorithms do not exist today and their development should be a major research priority. For CFD to further gain the confidence of valve designers and medical practitioners, it must also undergo comprehensive validation with experimental data. Such validation requires the use of high-resolution flow measuring tools and techniques and the integration of experimental studies with CFD modeling.

  12. The Effect of Stories for Thinking on Reading and Listening Comprehension: A Case Study in Turkey

    ERIC Educational Resources Information Center

    Tok, Sükran; Mazl, Aysegül

    2015-01-01

    This study has been conducted in order to examine the effects of the stories for thinking on 5th graders' reading comprehension and listening comprehension. A pretest-posttest control group quasi-experimental design was used in the study. The sample of the study was composed of 74 5th graders attending public elementary schools. The data have…

  13. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end-product release testing, despite its ability to generate in situ results and improve efficiency. One potential reason may be the lack of clear validation guidelines that can be applied to assess the suitability of fiber optics. This article describes a comprehensive validation scheme and the development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied to characterize the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end-product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
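
    As an illustration of two of the validation parameters listed above, the sketch below computes linearity (slope and R² of a calibration line) and precision (%RSD of replicate measurements); the data and any implied acceptance limits are invented.

```python
# Hedged sketch of two validation parameters named above: linearity (R^2 of a
# calibration line) and precision (%RSD of replicates). All numbers are invented.
import numpy as np

conc = np.array([20.0, 40.0, 60.0, 80.0, 100.0])          # % of label claim
absorbance = np.array([0.101, 0.204, 0.301, 0.405, 0.502])

slope, intercept = np.polyfit(conc, absorbance, 1)          # least-squares line
fitted = slope * conc + intercept
ss_res = np.sum((absorbance - fitted) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

replicates = np.array([99.1, 100.4, 98.7, 99.8, 100.9, 99.5])  # % dissolved, replicates
rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()

print(f"linearity: slope = {slope:.5f}, R^2 = {r_squared:.4f}")
print(f"precision: %RSD = {rsd:.2f}")
```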

  14. Reliability and validity of the Incontinence Quiz-Turkish version.

    PubMed

    Kara, Kerime C; Çıtak Karakaya, İlkim; Tunalı, Nur; Karakaya, Mehmet G

    2018-01-01

    The aim of this study was to investigate the reliability and validity of the Turkish version of the Incontinence Quiz, originally developed by Branch et al. (1994) to assess women's knowledge of and attitudes toward urinary incontinence. Comprehensibility of the Turkish version of the 14-item Incontinence Quiz, which was prepared following translation-back translation procedures, was tested on a pilot group of eight women, and its internal reliability, test-retest reliability and construct validity were assessed in 150 women who attended the gynecology clinics of three hospitals in İçel, Turkey. Physical and sociodemographic characteristics and the presence of incontinence complaints were also recorded. Data were analyzed at the 0.05 alpha level, using SPSS version 22. The scale had good reliability and validity. The internal reliability coefficient (Cronbach α) was 0.80, test-retest correlation coefficients were 0.83-0.94, and, with regard to construct validity, the Kaiser-Meyer-Olkin coefficient was 0.76 and the Bartlett sphericity test statistic was 562.777 (P = 0.000). The Turkish version of the Incontinence Quiz had a four-factor structure, with eigenvalues ranging from 1.17 to 4.08. The Incontinence Quiz-Turkish version is a highly comprehensible, reliable and valid scale, which may be used to assess Turkish-speaking women's knowledge of and attitudes toward urinary incontinence. © 2017 Japan Society of Obstetrics and Gynecology.
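
    For reference, the internal-consistency coefficient reported here, Cronbach's α, is computed from the item variances and the variance of the total score. The sketch below evaluates the standard formula on an invented response matrix, so the resulting value is not meaningful in itself.

```python
# Hedged sketch of Cronbach's alpha, the internal-consistency coefficient
# reported above: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
import numpy as np

rng = np.random.default_rng(1)
# 150 respondents x 14 dichotomous items; random data, so the resulting alpha is
# near zero and only the calculation itself is of interest here.
responses = rng.integers(0, 2, size=(150, 14))

k = responses.shape[1]
item_var = responses.var(axis=0, ddof=1)        # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)   # variance of the total score
alpha = (k / (k - 1)) * (1.0 - item_var.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```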

  15. An Approach to Comprehensive and Sustainable Solar Wind Model Validation

    NASA Astrophysics Data System (ADS)

    Rastaetter, L.; MacNeice, P. J.; Mays, M. L.; Boblitt, J. M.; Wiegand, C.

    2017-12-01

    The number of models of the corona and inner heliosphere, and of their updates and upgrades, grows steadily, as does the number and character of the model inputs. Maintaining up-to-date validation of these models, in the face of this constant model evolution, is a necessary but very labor-intensive activity. In the last year alone, both NASA's LWS program and the CCMC's ongoing support of model forecasting activities at NOAA SWPC have sought model validation reports on the quality of all aspects of the community's coronal and heliospheric models, including both ambient and CME-related wind solutions at L1. In this presentation I will give a brief review of the community's previous validation results for the L1 wind representation. I will discuss the semi-automated, web-based system we are constructing at the CCMC to present comparative visualizations of all interesting aspects of the solutions from competing models. This system is designed to be easily queried to provide the essential comprehensive inputs to repeat and update previous validation studies and to support extensions to them. I will illustrate this by demonstrating how the system is being used to support the CCMC/LWS Model Assessment Forum teams focused on the ambient and time-dependent corona and solar wind, including CME arrival time and IMF Bz. I will also discuss plans to extend the system to include results from the Forum teams addressing SEP model validation.

  16. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models was used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using the Hydra-TH code. Highlights: • We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. • Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier-Stokes equations. • Hydra-TH can accurately simulate laminar boundary layers. • Hydra-TH can accurately simulate turbulent boundary layers with RANS turbulence models. • Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.
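
    The grid-convergence claim in the highlights is the kind of result produced by a standard Richardson-style estimate of the observed order of accuracy from solutions on three systematically refined grids. The sketch below evaluates that estimate on invented solution values constructed to exhibit second-order behaviour; it is not Hydra-TH output.

```python
# Hedged sketch of estimating the observed order of accuracy from three
# systematically refined grids (Richardson-style solution verification).
# The solution values are invented and constructed to show 2nd-order behaviour.
import math

r = 2.0                                             # constant grid refinement ratio
f_fine, f_medium, f_coarse = 1.001, 1.004, 1.016    # invented solution values

p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
f_exact_est = f_fine + (f_fine - f_medium) / (r ** p - 1.0)  # Richardson extrapolation

print(f"observed order of accuracy p = {p:.2f}")
print(f"extrapolated (grid-independent) estimate = {f_exact_est:.4f}")
```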

  17. The Use of Variants of the Trail Making Test in Serial Assessment: A Construct Validity Study

    ERIC Educational Resources Information Center

    Atkinson, Thomas M.; Ryan, Jeanne P.

    2008-01-01

    The construct validity of three variants of the Trail Making Test was investigated using 162 undergraduate psychology students. During a 3-week period, the Trail Making Test of the Delis-Kaplan Executive Function System, Comprehensive Trail Making Test, and Connections Task were administered in six possible orders. Using confirmatory factor…

  18. Status and plans for the ANOPP/HSR prediction system

    NASA Technical Reports Server (NTRS)

    Nolan, Sandra K.

    1992-01-01

    ANOPP is a comprehensive prediction system that was developed and validated by NASA. Because ANOPP is a system-level prediction program, it allows aerospace industry researchers to conduct trade-off studies for a variety of aircraft noise problems. The extensive validation of ANOPP allows the program results to be used as a benchmark for testing other prediction codes.

  19. Factor Analysis Methods and Validity Evidence: A Systematic Review of Instrument Development across the Continuum of Medical Education

    ERIC Educational Resources Information Center

    Wetzel, Angela Payne

    2011-01-01

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across…

  20. The Spiritual Dimensions of Psychopolitical Validity: The Case of the Clergy Sexual Abuse Crisis

    ERIC Educational Resources Information Center

    Jones, Diana L.; Dokecki, Paul R.

    2008-01-01

    In this article, the authors explore the spiritual dimensions of psychopolitical validity and use it as a lens to analyze clergy sexual abuse. The psychopolitical approach suggests a comprehensive human science methodology that invites exploration of phenomena such as spirituality and religious experience and the use of methods from a wide variety…

  1. Concurrent Validity of Wechsler Adult Intelligence Scales-Third Edition Index Score Short Forms in the Canadian Standardization Sample

    ERIC Educational Resources Information Center

    Lange, Rael T.; Iverson, Grant L.

    2008-01-01

    This study evaluated the concurrent validity of estimated Wechsler Adult Intelligence Scales-Third Edition (WAIS-III) index scores using various one- and two-subtest combinations. Participants were the Canadian WAIS-III standardization sample. Using all possible one- and two-subtest combinations, an estimated Verbal Comprehension Index (VCI), an…

  2. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7.

  3. Construct Validation of the Louisiana School Analysis Model (SAM) Instructional Staff Questionnaire

    ERIC Educational Resources Information Center

    Bray-Clark, Nikki; Bates, Reid

    2005-01-01

    The purpose of this study was to validate the Louisiana SAM Instructional Staff Questionnaire (SISQ), a key component of the Louisiana School Analysis Model. The model was designed as a comprehensive evaluation tool for schools. Principal axis factoring with oblique rotation was used to uncover the underlying structure of the SISQ. (Contains 1 table.)

  4. The International AIDS Questionnaire-English Version (IAQ-E): Assessing the Validity and Reliability

    ERIC Educational Resources Information Center

    Davis, Cindy; Sloan, Melissa; MacMaster, Samuel; Hughes, Leslie

    2006-01-01

    In order to address HIV infection among college students, a comprehensive measure is needed that can be used with samples from culturally diverse populations. Therefore, this paper assessed the reliability and validity of an HIV/AIDS questionnaire that measures four dimensions of HIV/AIDS awareness--factual knowledge, prejudice, personal risk,…

  5. Comprehensive Assessment of Emotional Disturbance: A Cross-Validation Approach

    ERIC Educational Resources Information Center

    Fisher, Emily S.; Doyon, Katie E.; Saldana, Enrique; Allen, Megan Redding

    2007-01-01

    Assessing a student for emotional disturbance is a serious and complex task given the stigma of the label and the ambiguities of the federal definition. One way that school psychologists can be more confident in their assessment results is to cross validate data from different sources using the RIOT approach (Review, Interview, Observe, Test).…

  6. Determination of Trace Elements in Uranium by HPLC-ID-ICP-MS: NTNFC Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manard, Benjamin Thomas; Wylie, Ernest Miller II; Xu, Ning

    This report covers the FY16 effort on the HPLC-ID-ICP-MS methodology: 1) sub-method validation for the group I and II elements, 2) sub-method stand-up and validation for the rare earth elements (REEs), 3) sub-method development for the transition elements, and 4) completion of a comprehensive SOP for the three families of elements.

  7. Screening Systems and Decision Making at the Preschool Level: Application of a Comprehensive Validity Framework

    ERIC Educational Resources Information Center

    Kettler, Ryan J.; Feeney-Kettler, Kelly A.

    2011-01-01

    Universal screening is designed to be an efficient method for identifying preschool students with mental health problems, but prior to use, screening systems must be evaluated to determine their appropriateness within a specific setting. In this article, an evidence-based validity framework is applied to four screening systems for identifying…

  8. Effects of reading-oriented tasks on students' reading comprehension of geometry proof

    NASA Astrophysics Data System (ADS)

    Yang, Kai-Lin; Lin, Fou-Lai

    2012-06-01

    This study compared the effects of reading-oriented tasks and writing-oriented tasks on students' reading comprehension of geometry proof (RCGP). The reading-oriented tasks were designed with reading strategies and the idea of problem posing. The writing-oriented tasks were consistent with usual proof instruction for writing a proof and applying it. Twenty-two classes of ninth-grade students (N = 683), aged 14 to 15 years, and 12 mathematics teachers participated in this quasi-experimental classroom study. While the experimental group was instructed to read and discuss the reading tasks in two 45-minute lessons, the control group was instructed to prove and apply the same propositions. The generalised estimating equation (GEE) method was used to compare the scores of the post-test and the delayed post-test, with the pre-test scores as covariates. Results showed that the total scores of the delayed post-test of the experimental group were significantly higher than those of the control group. Furthermore, the scores of the experimental group on all facets of reading comprehension except the application facet were significantly higher than those of the control group for both the post-test and delayed post-test.

  9. Experimental investigation of an RNA sequence space

    NASA Technical Reports Server (NTRS)

    Lee, Youn-Hyung; Dsouza, Lisa; Fox, George E.

    1993-01-01

    Modern rRNAs are the historic consequence of an ongoing evolutionary exploration of a sequence space. These extant sequences belong to a special subset of the sequence space that is composed only of those primary sequences that can validly perform the biological function(s) required of the particular RNA. If it were possible to readily identify all such valid sequences, stochastic predictions could be made about the relative likelihood of various evolutionary pathways available to an RNA. Herein, an experimental system is described that can assess whether a particular sequence is likely to be valid as a eubacterial 5S rRNA. A total of ten naturally occurring, and hence known to be valid, sequences and two point mutants of unknown validity were used to test the usefulness of the approach. Nine of the ten valid sequences tested positive, whereas both mutants tested as clearly defective. The tenth valid sequence gave results that would be interpreted as reflecting a borderline status were the answer not known. These results demonstrate that it is possible to experimentally determine which sequences in local regions of the sequence space are potentially valid 5S rRNAs.

  10. The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).

    PubMed

    Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S

    2016-12-01

    The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.

  11. Improving English Reading Comprehension Ability through Survey, Questions, Read, Record, Recite, Review Strategy (SQ4R)

    ERIC Educational Resources Information Center

    Khusniyah, Nurul Lailatul; Lustyantie, Ninuk

    2017-01-01

    The aim of this study is to examine the effect of the survey, questions, read, record, recite, review (SQ4R) strategy on the reading comprehension ability of second-semester students. The study used the action research method. The sample comprised 34 students. The validity of the data was established through credibility, transferability, dependability, and…

  12. Use of the NBME Comprehensive Basic Science Examination as a Progress Test in the Preclerkship Curriculum of a New Medical School

    ERIC Educational Resources Information Center

    Johnson, Teresa R.; Khalil, Mohammed K.; Peppler, Richard D.; Davey, Diane D.; Kibble, Jonathan D.

    2014-01-01

    In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical…

  13. Relations between CBM (Oral Reading and Maze) and Reading Comprehension on State Achievement Tests: A Meta-Analysis

    ERIC Educational Resources Information Center

    Shin, Jaehyun

    2017-01-01

    The purpose of this study was to examine the validity of two widely used Curriculum-Based Measurement (CBM) tasks in reading--oral reading and maze--in relation to reading comprehension on state tests, using a meta-analysis. A total of 61 studies (132 correlations) were identified across Grades 1 to 10. A random-effects meta-analysis was conducted…

  14. Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine

    DTIC Science & Technology

    2014-04-15

    Authors: Amit Shrestha, Umashankar Joshi, Ziliang Zheng, Tamer Badawy, Naeim A. Henein (Wayne State University, Detroit, MI, USA). Report date: 13-03-2014. Objective: validate a two-component JP-8 surrogate in a single-cylinder diesel engine; validation parameters include ignition delay.

  15. The James Supportive Care Screening: integrating science and practice to meet the NCCN guidelines for distress management at a Comprehensive Cancer Center.

    PubMed

    Wells-Di Gregorio, Sharla; Porensky, Emily K; Minotti, Matthew; Brown, Susan; Snapp, Janet; Taylor, Robert M; Adolph, Michael D; Everett, Sherman; Lowther, Kenneth; Callahan, Kelly; Streva, Devita; Heinke, Vicki; Leno, Debra; Flower, Courtney; McVey, Anne; Andersen, Barbara Lee

    2013-09-01

    Selecting a measure for oncology distress screening can be challenging. The measure must be brief, but comprehensive, capturing patients' most distressing concerns. The measure must provide meaningful coverage of multiple domains, assess symptom and problem-related distress, and ideally be suited for both clinical and research purposes. From March 2006 to August 2012, the James Supportive Care Screening (SCS) was developed and validated in three phases including content validation, factor analysis, and measure validation. Exploratory factor analyses were completed with 596 oncology patients followed by a confirmatory factor analysis with 477 patients. Six factors were identified and confirmed including (i) emotional concerns; (ii) physical symptoms; (iii) social/practical problems; (iv) spiritual problems; (v) cognitive concerns; and (vi) healthcare decision making/communication issues. Subscale evaluation reveals good to excellent internal consistency, test-retest reliability, and convergent, divergent, and predictive validity. Specificity of individual items was 0.90 and 0.87, respectively, for identifying patients with DSM-IV-TR diagnoses of major depression and generalized anxiety disorder. Results support use of the James SCS to quickly detect the most frequent and distressing symptoms and concerns of cancer patients. The James SCS is an efficient, reliable, and valid clinical and research outcomes measure. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Comprehension of Written Grammar Test: Reliability and Known-Groups Validity Study With Hearing and Deaf and Hard-of-Hearing Students.

    PubMed

    Cannon, Joanna E; Hubley, Anita M; Millhoff, Courtney; Mazlouman, Shahla

    2016-01-01

    The aim of the current study was to gather validation evidence for the Comprehension of Written Grammar (CWG; Easterbrooks, 2010) receptive test of 26 grammatical structures of English print for use with children who are deaf and hard of hearing (DHH). Reliability and validity data were collected for 98 participants (49 DHH and 49 hearing) in Grades 2-6. The objectives were to: (a) examine 4-week test-retest reliability data; and (b) provide evidence of known-groups validity by examining expected differences between the groups on the CWG vocabulary pretest and main test, as well as selected structures. Results indicated excellent test-retest reliability estimates for CWG test scores. DHH participants performed statistically significantly lower on the CWG vocabulary pretest and main test than the hearing participants. Significantly lower performance by DHH participants on most expected grammatical structures (e.g., basic sentence patterns, auxiliary "be" singular/plural forms, tense, comparatives, and complementation) also provided known groups evidence. Overall, the findings of this study showed strong evidence of the reliability of scores and known group-based validity of inferences made from the CWG. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Validation of sterilizing grade filtration.

    PubMed

    Jornitz, M W; Meltzer, T H

    2003-01-01

    Validation considerations for sterilizing-grade filters, namely 0.2 micron filters, changed when the FDA voiced concerns about the validity of bacterial challenge tests performed in the past. Such validation exercises are nowadays considered to be filter qualification. Filter validation requires more thorough analysis, especially bacterial challenge testing with the actual drug product under process conditions. To do so, viability testing is a necessity to determine the bacterial challenge test methodology. In addition to these two compulsory tests, other evaluations such as extractables, adsorption, and chemical compatibility tests should be considered. PDA Technical Report #26, Sterilizing Filtration of Liquids, describes all parameters and aspects required for the comprehensive validation of filters. The report is a most helpful tool for the validation of liquid filters used in the biopharmaceutical industry. It sets the cornerstones of validation requirements and other filtration considerations.

  18. Modeling biomass gasification in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Miao, Qi

    In this thesis, the modeling of biomass gasification in circulating fluidized beds was studied. The hydrodynamics of a circulating fluidized bed operating on biomass particles were first investigated, both experimentally and numerically. Then a comprehensive mathematical model was presented to predict the overall performance of a 1.2 MWe biomass gasification and power generation plant. A sensitivity analysis was conducted to test its response to several gasifier operating conditions. The model was validated using the experimental results obtained from the plant and two other circulating fluidized bed biomass gasifiers (CFBBGs). Finally, an ASPEN PLUS simulation model of biomass gasification was presented based on minimization of the Gibbs free energy of the reaction system at chemical equilibrium. Hydrodynamics plays a crucial role in defining the performance of gas-solid circulating fluidized beds (CFBs). A 2-dimensional mathematical model was developed considering the hydrodynamic behavior of CFB gasifiers. In the modeling, the CFB riser was divided into two regions: a dense region at the bottom and a dilute region at the top of the riser. The model of Kunii and Levenspiel (1991) was adopted to express the vertical solids distribution, with some additional assumptions. Radial distributions of bed voidage were taken into account in the upper zone by using the correlation of Zhang et al. (1991). For model validation purposes, a cold-model CFB was employed, in which sawdust was transported with air as the fluidizing agent. A comprehensive mathematical model was developed to predict the overall performance of a 1.2 MWe biomass gasification and power generation demonstration plant in China. Hydrodynamics as well as chemical reaction kinetics were considered. The fluidized bed riser was divided into two distinct sections: (a) a dense region at the bottom of the bed, where biomass undergoes mainly heterogeneous reactions, and (b) a dilute region at the top, where most of the homogeneous reactions occur in the gas phase. Each section was divided into a number of small cells, over which mass and energy balances were applied. Due to the high heating rate in the circulating fluidized bed, pyrolysis was considered instantaneous. A number of homogeneous and heterogeneous reactions were considered in the model. Mass transfer resistance was considered negligible since the reactions were under kinetic control due to good gas-solid mixing. The model is capable of predicting the bed temperature distribution along the gasifier, the concentration and distribution of each species in the vertical direction of the bed, the composition and lower heating value (LHV) of the produced gas, the gasification efficiency, the overall carbon conversion, and the gas production rate. A sensitivity analysis was performed to test the model's response to several gasifier operating conditions. The sensitivity analysis showed that equivalence ratio (ER), bed temperature, fluidization velocity, biomass feed rate and moisture content had various effects on the gasifier performance; however, the model was most sensitive to variations in ER and bed temperature. The model was validated using the experimental results obtained from the demonstration plant. The reactor was operated on rice husk at various ERs, fluidization velocities and biomass feed rates. The model gave reasonable predictions.
The model was also validated by comparing the simulation results with two other CFBBGs of different sizes using different biomass feedstocks, and it was concluded that the developed model can be applied to other CFBBGs using various biomass fuels and having comparable reactor geometries. A thermodynamic model was developed in the ASPEN PLUS environment. Using the approach of Gibbs free energy minimization, the model was essentially independent of kinetic parameters. A sensitivity analysis was performed on the model to test its response to operating variables, including ER and biomass moisture content. The results showed that the ER has the greatest effect on the product gas composition and LHV. The simulation results were compared with the experimental data obtained from the demonstration plant. Keywords: Biomass gasification; Mathematical model; Circulating fluidized bed; Hydrodynamics; Kinetics; Sensitivity analysis; Validation; Equivalence ratio; Temperature; Feed rate; Moisture; Syngas composition; Lower heating value; Gasification efficiency; Carbon conversion
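
    Two of the quantities the gasifier model predicts, the equivalence ratio (ER) and the lower heating value (LHV) of the product gas, have simple defining formulas: ER is the ratio of the air actually supplied to the stoichiometric air requirement, and the dry-gas LHV is the volume-fraction-weighted sum of the component heating values. The sketch below evaluates both for invented operating conditions and an invented gas composition, using approximate tabulated volumetric heating values.

```python
# Hedged sketch of two quantities reported by the gasifier model above:
# ER = actual air supplied / stoichiometric air for complete combustion, and
# LHV of the dry product gas = sum(volume fraction * component volumetric LHV).
# Operating values and the gas composition are invented; heating values are
# approximate tabulated data.
air_actual = 1.6        # Nm^3 of air per kg of biomass (invented)
air_stoich = 4.6        # Nm^3 of air per kg for complete combustion (invented)
er = air_actual / air_stoich

lhv_component = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}   # MJ/Nm^3, approximate
gas_fraction = {"H2": 0.10, "CO": 0.18, "CH4": 0.04}     # dry-gas volume fractions (invented)

lhv_gas = sum(gas_fraction[s] * lhv_component[s] for s in gas_fraction)

print(f"equivalence ratio ER = {er:.2f}")
print(f"product gas LHV = {lhv_gas:.2f} MJ/Nm^3")
```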

  19. A Pilot Study of Biomedical Text Comprehension using an Attention-Based Deep Neural Reader: Design and Experimental Analysis.

    PubMed

    Kim, Seongsoon; Park, Donghyeon; Choi, Yonghwa; Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon; Kang, Jaewoo

    2018-01-05

    With the development of artificial intelligence (AI) technology centered on deep learning, computers have evolved to a point where they can read a given text and answer a question based on the context of the text. This specific task is known as machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary-school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used pretrained word vectors and biomedical entity type embeddings. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert-level knowledge. ©Seongsoon Kim, Donghyeon Park, Yonghwa Choi, Kyubum Lee, Byounggun Kim, Minji Jeon, Jihye Kim, Aik Choon Tan, Jaewoo Kang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 05.01.2018.
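
    To make the modeling idea concrete, the sketch below reproduces, in numpy, the scoring step of a generic attention-based cloze-style reader together with ensemble averaging: attention weights over context-token encodings are computed against a question encoding, summed over each candidate's occurrences, and the candidate probabilities of several independent models are averaged. The random encodings are placeholders; this is not the authors' trained model.

```python
# Hedged numpy sketch of a generic attention-based cloze-style reader and of
# ensemble averaging, as described above. Token encodings are random placeholders
# standing in for a trained encoder; this is not the authors' model.
import numpy as np

d = 16                                   # encoding dimension (illustrative)
context_tokens = ["protein", "binds", "the", "receptor", "protein"]
candidates = {"protein", "receptor"}     # answer candidates appearing in the context

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def model_candidate_probs(seed):
    """One 'model' of the ensemble: attention over context tokens vs. the question."""
    rng = np.random.default_rng(seed)
    context_enc = rng.normal(size=(len(context_tokens), d))   # placeholder encoder output
    question_enc = rng.normal(size=d)
    attention = softmax(context_enc @ question_enc)           # one weight per token
    probs = {c: 0.0 for c in candidates}
    for token, w in zip(context_tokens, attention):           # attention-sum over mentions
        if token in probs:
            probs[token] += w
    total = sum(probs.values())
    return {c: p / total for c, p in probs.items()}

# Ensemble: average candidate probabilities from several independently seeded models.
ensemble = [model_candidate_probs(seed) for seed in (1, 2, 3)]
avg = {c: np.mean([m[c] for m in ensemble]) for c in candidates}
print(max(avg, key=avg.get), avg)
```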

  20. Comprehensive analysis of transport aircraft flight performance

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2008-04-01

    This paper reviews the state of the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), the specific air range, the brake-release gross weight and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.
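
    One of the cruise metrics mentioned above, specific air range (air distance flown per unit of fuel burned), follows directly from speed, lift-to-drag ratio, thrust-specific fuel consumption, and weight in steady cruise. The sketch below evaluates it for rough, invented inputs, not data from the paper's model.

```python
# Hedged sketch of a specific air range (SAR) estimate in steady cruise:
# fuel flow = TSFC * thrust = TSFC * W / (L/D), and SAR = V / fuel flow.
# All inputs are rough, invented cruise values, not the paper's data.
V = 250.0              # true airspeed, m/s
L_over_D = 18.0        # cruise lift-to-drag ratio
tsfc = 1.6e-5          # thrust-specific fuel consumption, kg/(N*s)
W = 250_000.0 * 9.81   # aircraft weight, N

fuel_flow = tsfc * W / L_over_D   # kg/s (thrust equals drag in steady cruise)
sar = V / fuel_flow               # metres of air distance per kg of fuel

print(f"fuel flow = {fuel_flow:.2f} kg/s, SAR = {sar / 1000.0:.3f} km/kg")
```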
