Methodological convergence of program evaluation designs.
Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2014-01-01
The dichotomy between experimental/quasi-experimental and non-experimental/ethnographic studies persists, yet despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been based on experimental and quasi-experimental designs. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is continually changing and increasingly blurred. Based on the classical validity framework of experimental/quasi-experimental studies, we review the literature to analyze the convergence of design elements bearing on methodological quality in primary studies in systematic reviews and in ethnographic research. We specify the design elements that should be taken into account to improve validity and generalization in program evaluation practice across methodologies, from a practical and complementary methodological viewpoint, and recommend ways to strengthen these elements in program evaluation practice.
Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen
2018-02-15
This study systematically searched the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review design was used to search six databases, including one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency) and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In summary, this systematic review identified limited evidence of the reliability and validity of 14 different survey instruments for assessing the fatigue and/or sleepiness status of EMS personnel and related shift-worker groups.
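Dual independent screening of records, as described above, is commonly summarized with an inter-rater agreement statistic. The study does not report one, so the following is only an illustration of how such agreement can be computed (Cohen's kappa; the screening decisions below are invented):

```python
# Illustrative only: Cohen's kappa for two reviewers' include/exclude
# screening decisions. The abstract reports independent screening but
# not this statistic; the labels below are made up.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Kappa corrects raw percent agreement for the agreement expected by chance given each reviewer's label frequencies.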
2016-05-24
[Truncated search snippet; no title or authors recovered] ...validated with experimental data. However, the time and length scales, and energy deposition rates in the canonical laboratory flames that have been studied over the... ...is to obtain high-fidelity experimental data critically needed to validate research codes at relevant conditions, and to develop systematic and...
NASA Astrophysics Data System (ADS)
Christiansen, Rasmus E.; Sigmund, Ole
2016-09-01
This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects—insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.
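The negative refractive behavior reported above can be illustrated with Snell's law applied with an effective negative index (a schematic description only; the index value below is invented and is not the slab's measured property):

```python
import math

def refraction_angle_deg(n_in, n_eff, theta_in_deg):
    """Signed refraction angle from Snell's law, n_in*sin(ti) = n_eff*sin(tt).
    A negative effective index n_eff places the transmitted ray on the same
    side of the normal as the incident ray (negative refraction)."""
    s = n_in * math.sin(math.radians(theta_in_deg)) / n_eff
    if abs(s) > 1:
        return None  # total internal reflection, no transmitted ray
    return math.degrees(math.asin(s))

# Illustrative values, not the slab parameters from the Letter:
print(refraction_angle_deg(1.0, -0.7, 30.0))
```

A zero-crossing effective index would give a near-zero refraction angle over a wide angular range, which is the funneling behavior the abstract mentions.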
A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, Ronald
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.
NASA Astrophysics Data System (ADS)
Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French
2007-03-01
A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295μm lateral dimensions, 16-39μm membrane thickness, and 1-28psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150μm width) are shown to behave like thin springs.
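The abstract gives the final model's character (a fourth-power-polynomial superposition) but not its coefficients, so the following is only a hypothetical sketch of a thin-plate-like dimensional scaling for closing pressure; both the functional form and the constant C are invented for illustration:

```python
# Hypothetical sketch, NOT the paper's validated model: a thin-plate-like
# scaling in which closing pressure rises with membrane thickness and
# falls with the valve's lateral dimensions. C is an arbitrary constant.

def closing_pressure_psi(width_um, length_um, thickness_um, C=5.0e6):
    """Illustrative closing-pressure estimate: stiffer (thicker) membranes
    over smaller valve footprints require higher actuation pressure."""
    return C * thickness_um**3 / (width_um**2 * length_um**2)

# A thicker membrane over the same footprint needs more pressure:
low = closing_pressure_psi(100, 100, 16)
high = closing_pressure_psi(100, 100, 39)
assert high > low
```

The qualitative trends (pressure increasing with thickness, decreasing with lateral size) match the thin-spring behavior the abstract describes for typical 80-150 μm valves.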
Validation and upgrading of physically based mathematical models
NASA Technical Reports Server (NTRS)
Duval, Ronald
1992-01-01
The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.
Improving the governance of patient safety in emergency care: a systematic review of interventions
Hesselink, Gijs; Berben, Sivera; Beune, Thimpe
2016-01-01
Objectives To systematically review interventions that aim to improve the governance of patient safety within emergency care on effectiveness, reliability, validity and feasibility. Design A systematic review of the literature. Methods PubMed, EMBASE, Cumulative Index to Nursing and Allied Health Literature, the Cochrane Database of Systematic Reviews and PsychInfo were searched for studies published between January 1990 and July 2014. We included studies evaluating interventions relevant for higher management to oversee and manage patient safety, in prehospital emergency medical service (EMS) organisations and hospital-based emergency departments (EDs). Two reviewers independently selected candidate studies, extracted data and assessed study quality. Studies were categorised according to study quality, setting, sample, intervention characteristics and findings. Results Of the 18 included studies, 13 (72%) were non-experimental. Nine studies (50%) reported data on the reliability and/or validity of the intervention. Eight studies (44%) reported on the feasibility of the intervention. Only 4 studies (22%) reported statistically significant effects. The use of a simulation-based training programme and well-designed incident reporting systems led to a statistically significant improvement of safety knowledge and attitudes by ED staff and an increase of incident reports within EDs, respectively. Conclusions Characteristics of the interventions included in this review (eg, anonymous incident reporting and validation of incident reports by an independent party) could provide useful input for the design of an effective tool to govern patient safety in EMS organisations and EDs. However, executives cannot rely on a robust set of evidence-based and feasible tools to govern patient safety within their emergency care organisation and in the chain of emergency care. 
Established strategies from other high-risk sectors need to be evaluated in emergency care settings, using an experimental design with valid outcome measures to strengthen the evidence base. PMID:26826151
[The use of systematic review to develop a self-management program for CKD].
Lee, Yu-Chin; Wu, Shu-Fang Vivienne; Lee, Mei-Chen; Chen, Fu-An; Yao, Yen-Hong; Wang, Chin-Ling
2014-12-01
Chronic kidney disease (CKD) has become a public health issue of international concern due to its high prevalence. The concept of self-management has been applied comprehensively in education programs that address chronic diseases. In recent years, many studies have used self-management programs in CKD interventions and have investigated the pre- and post-intervention physiological and psychological effectiveness of this approach. However, a complete clinical application program based on the self-management model has yet to be developed for use in clinical renal care settings. A systematic review was therefore used to develop a self-management program for CKD. Three implementation steps were used in this study: (1) A systematic literature search and review using databases including CEPS (Chinese Electronic Periodical Services) of Airiti, the National Digital Library of Theses and Dissertations in Taiwan, CINAHL, PubMed, MEDLINE, the Cochrane Library, and the Joanna Briggs Institute. A total of 22 studies were identified as valid and submitted to rigorous analysis; of these, 4 were systematic literature reviews, 10 were randomized experimental studies, and 8 were non-randomized experimental studies. (2) The empirical evidence was then used to draft relevant guidelines for clinical application. (3) Finally, expert panels tested the validity of the draft to ensure the final version was valid for application in practice. This study designed a self-management program for CKD based on the findings of empirical studies. The content of this program included the design principles, categories, elements, and intervention measures used in the self-management program. The program was then assessed using the content validity index (CVI) on a four-point Likert scale; the content validity score was .98. Guidelines for the CKD self-management program were thus developed. This study developed a self-management program applicable to local care of CKD.
It is hoped that the guidelines developed in this study offer a reference for clinical caregivers to improve their healthcare practices.
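The content validity index reported above is conventionally computed, for a four-point relevance scale, as the proportion of expert ratings of 3 or 4. A minimal sketch (the panel ratings below are invented, not the study's data):

```python
def content_validity_index(ratings):
    """Item-level CVI: share of expert ratings of 3 or 4 on a 4-point
    relevance scale (1 = not relevant ... 4 = highly relevant)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# Invented panel ratings for one guideline item:
print(content_validity_index([4, 4, 3, 4, 3, 4, 4, 2]))  # → 0.875
```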
ERIC Educational Resources Information Center
Barton, Erin E.; Pustejovsky, James E.; Maggin, Daniel M.; Reichow, Brian
2017-01-01
The adoption of methods and strategies validated through rigorous, experimentally oriented research is a core professional value of special education. We conducted a systematic review and meta-analysis examining the experimental literature on Technology-Aided Instruction and Intervention (TAII) using research identified as part of the National…
Zhuang, Jinda; Ju, Y Sungtaek
2015-09-22
The deformation and rupture of axisymmetric liquid bridges being stretched between two fully wetted coaxial disks are studied experimentally and theoretically. We numerically solve the time-dependent Navier-Stokes equations while tracking the deformation of the liquid-air interface using the arbitrary Lagrangian-Eulerian (ALE) moving mesh method to fully account for the effects of inertia and viscous forces on bridge dynamics. The effects of the stretching velocity, liquid properties, and liquid volume on the dynamics of liquid bridges are systematically investigated to provide direct experimental validation of our numerical model for stretching velocities as high as 3 m/s. The Ohnesorge number (Oh) of liquid bridges is a primary factor governing the dynamics of liquid bridge rupture, especially the dependence of the rupture distance on the stretching velocity. The rupture distance generally increases with the stretching velocity, far in excess of the static stability limit. For bridges with low Ohnesorge numbers, however, the rupture distance stays nearly constant or decreases with the stretching velocity within certain velocity windows due to switching of the relative rupture position and changes in thread shape. Our work provides an experimentally validated modeling approach and experimental data to help establish a foundation for further systematic studies and applications of liquid bridges.
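The Ohnesorge number used above is a standard dimensionless group, Oh = μ/√(ρσL). A quick worked example (fluid properties are approximate room-temperature values for water; the length scale is illustrative):

```python
import math

def ohnesorge(mu, rho, sigma, L):
    """Oh = mu / sqrt(rho * sigma * L): viscous forces relative to
    inertial and surface-tension forces for a liquid bridge with
    characteristic length L (SI units throughout)."""
    return mu / math.sqrt(rho * sigma * L)

# Water bridge between ~1 mm disks (approximate properties):
oh_water = ohnesorge(mu=1.0e-3, rho=998.0, sigma=0.072, L=1.0e-3)
print(f"Oh = {oh_water:.4f}")  # low-Oh regime discussed in the abstract
```

Low-Oh bridges like this are the regime in which the abstract reports the non-monotonic dependence of rupture distance on stretching velocity.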
Roles of Naturalistic Observation in Comparative Psychology
ERIC Educational Resources Information Center
Miller, David B.
1977-01-01
"Five roles are considered by which systematic, quantified field research can augment controlled laboratory experimentation in terms of increasing the validity of laboratory studies." Advocates that comparative psychologists should "take more initiative in designing, executing, and interpreting our experiments with regard to the natural history of…
Kobayashi, T.; Itoh, K.; Ido, T.; Kamiya, K.; Itoh, S.-I.; Miura, Y.; Nagashima, Y.; Fujisawa, A.; Inagaki, S.; Ida, K.; Hoshino, K.
2016-01-01
Self-regulation between structure and turbulence, a fundamental process in complex systems, has been widely regarded as one of the central issues in modern physics. A typical example in magnetically confined plasmas is the transition from the low-confinement mode to the high-confinement mode (the L-H transition), which has been intensely studied for more than thirty years because it provides the confinement improvement necessary for the realization of a fusion reactor. An essential issue in L-H transition physics is the mechanism of the abrupt "radial" electric field generation in toroidal plasmas. To date, several models for the L-H transition have been proposed, but systematic experimental validation has remained challenging. Here we report the first systematic and quantitative model validations of the radial electric field excitation mechanism, using a data set of the turbulence and the radial electric field with high spatiotemporal resolution. Examining the time derivative of Poisson's equation, the sum of the loss-cone loss current and the neoclassical bulk viscosity current is found to behave as the experimentally observed radial current that excites the radial electric field within a few factors of magnitude. PMID:27489128
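The step described above, taking the time derivative of Poisson's equation so that radial currents drive the rate of change of the radial electric field, can be sketched schematically as follows (the perpendicular dielectric factor and the current decomposition are written generically; the study's exact expressions are not given in the abstract):

```latex
% Schematic charge balance: the net radial current J_r charges the
% flux surface and excites the radial electric field E_r.
\epsilon_0 \epsilon_\perp \frac{\partial E_r}{\partial t} = -\,J_r,
\qquad
J_r \simeq J_{\mathrm{loss\text{-}cone}} + J_{\mathrm{NC\ viscosity}}
```

The abstract's finding is that the right-hand side, evaluated from measurements, reproduces the observed charging current within a few factors of magnitude.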
NASA Astrophysics Data System (ADS)
Pitts, James Daniel
Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates in the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for predicting multiple machined-surface roughness parameters for helically pocketed, rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next, empirical equations for seven common surface quality metrics were developed via Box-Behnken surface response experimental trials. Validation trials were conducted, resulting in predicted and experimental surface roughness in varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, were experimentally verified to also extend to helical pocketing of Zerodur glass-ceramic.
Collins, Anne; Ross, Janine
2017-01-01
We performed a systematic review to identify all original publications describing the asymmetric inheritance of cellular organelles in normal animal eukaryotic cells and to critique the validity and imprecision of the evidence. Searches were performed in Embase, MEDLINE and Pubmed up to November 2015. Screening of titles, abstracts and full papers was performed by two independent reviewers. Data extraction and validity assessment were performed by one reviewer and checked by a second reviewer. Study quality was assessed using the SYRCLE risk of bias tool for animal studies and by developing validity tools for the experimental model, organelle markers and imprecision. A narrative data synthesis was performed. We identified 31 studies (34 publications) of the asymmetric inheritance of organelles after mitotic or meiotic division. Studies of the asymmetric inheritance of centrosomes (n = 9), endosomes (n = 6), P granules (n = 4), the midbody (n = 3), mitochondria (n = 3), proteasomes (n = 2), spectrosomes (n = 2), cilia (n = 2) and endoplasmic reticulum (n = 2) were identified. Asymmetry was defined and quantified by variable methods. Assessment of the statistical reliability of the results indicated that only two studies (7%) were judged to have low concern; the majority of studies (77%) were 'unclear' and five (16%) were judged to have 'high concerns', the main reason being low technical repeats (<10). Assessment of model validity indicated that the majority of studies (61%) were judged to be valid, ten studies (32%) were unclear and two studies (7%) were judged to have 'high concerns'; both described 'stem cells' without providing experimental evidence to confirm this (pluripotency and self-renewal). Assessment of marker validity indicated that no studies had low concern; most studies were unclear (96.5%), indicating there were insufficient details to judge whether the markers were appropriate.
One study had high concern for marker validity due to the contradictory results of two markers for the same organelle. For most studies the validity and imprecision of results could not be confirmed. In particular, data were limited by a lack of reporting of interassay variability, sample size calculations, controls and functional validation of organelle markers. An evaluation of 16 systematic reviews containing cell assays found that only 50% reported adherence to PRISMA or ARRIVE reporting guidelines and 38% reported a formal risk of bias assessment. 44% of the reviews did not consider how relevant or valid the models were to the research question, 75% of reviews did not consider how valid the markers were, and 69% of reviews did not consider the impact of the statistical reliability of the results. Future systematic reviews in basic or preclinical research should ensure the rigorous reporting of the statistical reliability of the results in addition to the validity of the methods. Increased awareness of the importance of reporting guidelines and validation tools is needed in the scientific community. PMID:28562636
Modeling motivated misreports to sensitive survey questions.
Böckenholt, Ulf
2014-07-01
Asking sensitive or personal questions in surveys or experimental studies can both lower response rates and increase item non-response and misreports. Although non-response is easily diagnosed, misreports are not. However, misreports cannot be ignored because they give rise to systematic bias. The purpose of this paper is to present a modeling approach that identifies misreports and corrects for them. Misreports are conceptualized as a motivated process under which respondents edit their answers before they report them. For example, systematic bias introduced by overreports of socially desirable behaviors or underreports of less socially desirable ones can be modeled, leading to more-valid inferences. The proposed approach is applied to a large-scale experimental study and shows that respondents who feel powerful tend to overclaim their knowledge.
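A deliberately simple illustration of the bias-correction idea, not the paper's response-editing model: if the probabilities of underreporting and overreporting are assumed known, the observed "yes" rate can be inverted to estimate the true prevalence. All numbers below are hypothetical:

```python
# Simple misclassification correction (illustrative, NOT the paper's
# model): respondents with the sensitive trait deny it with probability
# p_under, and those without it claim it with probability p_over.

def true_prevalence(observed_yes, p_under, p_over):
    # observed = true*(1 - p_under) + (1 - true)*p_over, solved for true
    return (observed_yes - p_over) / (1 - p_under - p_over)

# Hypothetical survey: 18% report a stigmatized behavior; assume 20% of
# true "yes" respondents deny it and 2% of "no" respondents overclaim.
print(round(true_prevalence(0.18, 0.20, 0.02), 3))  # → 0.205
```

The paper's contribution is to estimate the editing process from the data itself rather than assuming the misreport probabilities, but the direction of the correction is the same.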
A combined computational-experimental analyses of selected metabolic enzymes in Pseudomonas species.
Perumal, Deepak; Lim, Chu Sing; Chow, Vincent T K; Sakharkar, Kishore R; Sakharkar, Meena K
2008-09-10
Comparative genomic analysis has revolutionized our ability to predict the metabolic subsystems that occur in newly sequenced genomes, and to explore the functional roles of the set of genes within each subsystem. These computational predictions can considerably reduce the volume of experimental studies required to assess basic metabolic properties of multiple bacterial species. However, experimental validations are still required to resolve the apparent inconsistencies in the predictions by multiple resources. Here, we present combined computational-experimental analyses on eight completely sequenced Pseudomonas species. Comparative pathway analyses reveal that several pathways within the Pseudomonas species show high plasticity and versatility. Potential bypasses in 11 metabolic pathways were identified. We further confirmed the presence of the enzyme O-acetyl homoserine (thiol) lyase (EC: 2.5.1.49) in P. syringae pv. tomato that revealed inconsistent annotations in KEGG and in the recently published SYSTOMONAS database. These analyses connect and integrate systematic data generation, computational data interpretation, and experimental validation and represent a synergistic and powerful means for conducting biological research.
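The comparative pathway analysis described above amounts, at its simplest, to set operations over per-species enzyme annotations: enzymes present in some genomes but absent from others flag candidate bypasses or inconsistent annotations. A toy sketch (the species names are real; the EC sets are invented for illustration, except EC 2.5.1.49, which the study confirmed in P. syringae pv. tomato):

```python
# Toy comparative-pathway check: per-species enzyme (EC number) sets.
# The assignments below are illustrative, not the study's annotations.
pathways = {
    "P. aeruginosa":          {"2.5.1.49", "2.1.1.14", "4.2.99.10"},
    "P. syringae pv. tomato": {"2.5.1.49", "2.1.1.14"},
    "P. fluorescens":         {"2.1.1.14", "4.2.99.10"},
}

# Enzymes shared by every species (the conserved core):
core = set.intersection(*pathways.values())
print("core enzymes:", sorted(core))

# Enzymes missing from each species relative to the union: candidate
# pathway gaps, bypasses, or annotation inconsistencies to verify.
union = set.union(*pathways.values())
for species, ecs in pathways.items():
    missing = union - ecs
    if missing:
        print(f"{species}: candidate gaps {sorted(missing)}")
```

In the study, precisely such computational discrepancies (e.g., EC 2.5.1.49 annotated inconsistently in KEGG and SYSTOMONAS) were resolved by experimental validation.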
Moored offshore structures - evaluation of forces in elastic mooring lines
NASA Astrophysics Data System (ADS)
Crudu, L.; Obreja, D. C.; Marcu, O.
2016-08-01
In most situations, the high-frequency motions of a floating structure induce significant loads in the mooring lines, which in turn affect the motions of the structure. Experience accumulated during systematic experimental tests and calculations for different moored floating structures has shown a complex influence of various parameters on the dynamic effects, so a systematic investigation was considered necessary. Owing to the complexity of the hydrodynamics of offshore structures, experimental tests are practically compulsory in order to properly evaluate and then validate their behaviour in a real sea. Moreover, hydrodynamic tests are often required by customers, classification societies and other regulatory bodies. Consequently, the correct simulation of the physical properties of complex scaled models becomes a very important issue. The paper investigates these problems, identifying possible simplifications and generating different approaches. One basis of the evaluation is the results of systematic experimental tests on the dynamic behaviour of a mooring chain reproduced at five different scales; the dynamic effects and the influence of elasticity simulation at the five scales are evaluated together. The paper presents systematic diagrams and practical results, based on evaluations of motions and accelerations in waves, for a typical moored floating structure operating as a pipe layer.
Brooks, Mark A; Gewartowski, Kamil; Mitsiki, Eirini; Létoquart, Juliette; Pache, Roland A; Billier, Ysaline; Bertero, Michela; Corréa, Margot; Czarnocki-Cieciura, Mariusz; Dadlez, Michal; Henriot, Véronique; Lazar, Noureddine; Delbos, Lila; Lebert, Dorothée; Piwowarski, Jan; Rochaix, Pascal; Böttcher, Bettina; Serrano, Luis; Séraphin, Bertrand; van Tilbeurgh, Herman; Aloy, Patrick; Perrakis, Anastassis; Dziembowski, Andrzej
2010-09-08
For high-throughput structural studies of protein complexes of composition inferred from proteomics data, it is crucial that candidate complexes are selected accurately. Herein, we exemplify a procedure that combines a bioinformatics tool for complex selection with in vivo validation, to deliver structural results in a medium-throughout manner. We have selected a set of 20 yeast complexes, which were predicted to be feasible by either an automated bioinformatics algorithm, by manual inspection of primary data, or by literature searches. These complexes were validated with two straightforward and efficient biochemical assays, and heterologous expression technologies of complex components were then used to produce the complexes to assess their feasibility experimentally. Approximately one-half of the selected complexes were useful for structural studies, and we detail one particular success story. Our results underscore the importance of accurate target selection and validation in avoiding transient, unstable, or simply nonexistent complexes from the outset. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Tasnim; Lissenden, Cliff; Carroll, Laura
The proposed research will develop systematic sets of uniaxial and multiaxial experimental data at very high temperature (850-950°C) for Alloy 617. The loading histories to be prescribed in the experiments will induce creep-fatigue and creep-ratcheting failure mechanisms. These experimental responses will be scrutinized in order to quantify the influences of temperature and creep on fatigue and ratcheting failures. A unified constitutive model (UCM) will be developed and validated against these experimental responses. The improved UCM will be incorporated into the widely used commercial finite element software package ANSYS. The modified ANSYS will be validated so that it can be used for evaluating the very high temperature ASME-NH design-by-analysis methodology for Alloy 617, thereby addressing the ASME-NH design code issues.
Corporate Entrepreneurship Assessment Instrument (CEAI): Systematic Validation of a Measure
2006-03-01
AFIT/GIR/ENV/06M-05. Thesis; distribution unlimited.
Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda
2002-01-01
A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch size-exclusion chromatography (SEC) process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.
Ravikumar, Balaguru; Parri, Elina; Timonen, Sanna; Airola, Antti; Wennerberg, Krister
2017-01-01
Due to the relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process by predicting the most potent compound-target interactions for subsequent verification. However, most of the model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to a more challenging task: predicting target interactions for a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with a currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our subsequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications.
These results demonstrate that the kernel-based modeling approach offers practical benefits for probing novel insights into the mode of action of investigational compounds, and for the identification of new target selectivities for drug repurposing applications. PMID:28787438
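As a rough illustration of the kernel-based regression idea underlying the framework above, here is a minimal Gaussian-kernel ridge regression on a toy one-dimensional descriptor. This is a generic sketch, not the study's model, which used compound and kinase kernels over real profiling data:

```python
import math

def gaussian_kernel(x, y, gamma=1.0):
    """Similarity between two scalar descriptors."""
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_kernel_ridge(xs, ys, lam=0.1, gamma=1.0):
    """Fit alpha in (K + lam*I) alpha = y; return the predictor."""
    n = len(xs)
    K = [[gaussian_kernel(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * gaussian_kernel(x, xi, gamma)
                         for a, xi in zip(alpha, xs))

# Toy "binding affinities" as a smooth function of one descriptor:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.8, 0.9, 0.2]
predict = fit_kernel_ridge(xs, ys)
print(predict(1.5))
```

The ridge term lam regularizes the fit, which is what allows such models to generalize to unmeasured compound-target pairs.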
Systematic Validation of Protein Force Fields against Experimental Data
Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.
2012-01-01
Molecular dynamics simulations provide a vehicle for capturing the structures, motions, and interactions of biological macromolecules in full atomic detail. The accuracy of such simulations, however, is critically dependent on the force field—the mathematical model used to approximate the atomic-level forces acting on the simulated molecular system. Here we present a systematic and extensive evaluation of eight different protein force fields based on comparisons of experimental data with molecular dynamics simulations that reach a previously inaccessible timescale. First, through extensive comparisons with experimental NMR data, we examined the force fields' abilities to describe the structure and fluctuations of folded proteins. Second, we quantified potential biases towards different secondary structure types by comparing experimental and simulation data for small peptides that preferentially populate either helical or sheet-like structures. Third, we tested the force fields' abilities to fold two small proteins—one α-helical, the other with β-sheet structure. The results suggest that force fields have improved over time, and that the most recent versions, while not perfect, provide an accurate description of many structural and dynamical properties of proteins. PMID:22384157
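One simple ingredient of the simulation-versus-experiment comparisons described above is a root-mean-square deviation between simulated and measured observables (e.g., NMR scalar couplings). A minimal sketch with invented values:

```python
import math

def rmsd(simulated, experimental):
    """Root-mean-square deviation between paired observables."""
    assert len(simulated) == len(experimental)
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(simulated, experimental))
                     / len(simulated))

sim = [7.1, 6.4, 8.0, 5.5]   # Hz, hypothetical simulated couplings
exp_ = [6.9, 6.8, 7.5, 5.9]  # Hz, hypothetical measured couplings
print(round(rmsd(sim, exp_), 3))  # → 0.391
```

A per-force-field RMSD over many such observables is one way comparisons of this kind are ranked.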
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
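A rigorous statistical comparison of simulated and measured quantities of the kind described here commonly reduces to a goodness-of-fit statistic. The cross-section values and uncertainties below are invented for illustration and are not taken from the review:

```python
def chi2_stat(measured, sigma, simulated):
    """Chi-square statistic comparing simulated values against
    experimental measurements with 1-sigma uncertainties."""
    return sum(((m - s) / e) ** 2
               for m, e, s in zip(measured, sigma, simulated))

# Hypothetical photon cross-section points (barns) with uncertainties:
meas = [2.10, 1.45, 0.98, 0.61]
sig  = [0.05, 0.04, 0.03, 0.02]
sim  = [2.06, 1.50, 0.99, 0.63]

chi2 = chi2_stat(meas, sig, sim)
dof = len(meas)              # no fitted parameters in this comparison
reduced_chi2 = chi2 / dof    # ~1 indicates agreement within uncertainties
```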
Barone, M; Cogliandro, A; Salzillo, R; Tambone, V; Persichetti, P
2018-06-19
The objectives of the current study were: (1) to perform a systematic review of the existing scientific literature on appearance and any subsequently related disorders and (2) to research in the literature the correlation between the role of appearance and patient's disease. A systematic review protocol was developed a priori in accordance with the Preferred Reporting for Items for Systematic Reviews and Meta-Analyses-Protocols (PRISMA-P) guidance. A multistep search of the PubMed, MEDLINE, PreMEDLINE, Embase, Ebase, CINAHL, PsychINFO and Cochrane databases was performed to identify studies on patient satisfaction, quality of life, and body image. Our search generated a total of 347 articles. We performed a systematic review of the 18 studies, which had sufficient data and met all inclusion criteria. All studies identified from the literature review were assessed to determine the utilization of validated patient satisfaction questionnaires. The questionnaires were analyzed by reviewers to assess adherence to the rules of the US Food and Drug Administration and the Scientific Advisory Committee of the Medical Outcomes Trust. We identified 27 individual questionnaires. We summarized development and validation characteristics and content of the 27 validated measures used in the studies. This is the first systematic review to identify and critically appraise patient-reported outcome measures for appearance and body image using internationally accepted criteria. DAS59 was deemed to have adequate levels of methodological and psychometric evidence. We also introduced the concept of Appearance-Pain which consists of the recomposed systematic view of the experimental indicators of suffering, linked to one of the dimensions of appearance. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Verification and Validation Studies for the LAVA CFD Solver
NASA Technical Reports Server (NTRS)
Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.
2013-01-01
The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described, incorporating verification tests, validation benchmarks, continuous integration, and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of the 2D Euler equations, the 3D Navier-Stokes equations, and turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
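The Method of Manufactured Solutions named above can be illustrated on a model problem far simpler than the Euler or Navier-Stokes equations: choose an exact solution, derive the forcing term it implies, and confirm that the discretization error shrinks at the scheme's formal order. The 1-D Poisson problem below is an illustrative stand-in, not the LAVA test suite:

```python
import numpy as np

def solve_poisson(n):
    """Second-order finite-difference solve of -u'' = f on (0,1) with
    u(0)=u(1)=0, where f is manufactured from the exact u = sin(pi x).
    Returns the max-norm error against the manufactured solution."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = (np.pi ** 2) * np.sin(np.pi * x[1:-1])   # -u'' of the chosen u
    # Tridiagonal system for the interior unknowns
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h ** 2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e_coarse, e_fine = solve_poisson(32), solve_poisson(64)
order = np.log2(e_coarse / e_fine)  # observed order; ~2 for this scheme
```

The verification criterion is that `order` approaches the scheme's formal accuracy as the grid is refined; a mismatch signals a coding or discretization error.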
2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation
NASA Technical Reports Server (NTRS)
Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.
2009-01-01
A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The project includes collaborative work by NASA research engineers; CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.
Szymczynska, P; Walsh, S; Greenberg, L; Priebe, S
2017-07-01
Essential criteria for the methodological quality and validity of randomized controlled trials are the drop-out rates from both the experimental intervention and the study as a whole. This systematic review and meta-analysis assessed these drop-out rates in non-pharmacological schizophrenia trials. A systematic literature search was used to identify relevant trials with ≥100 sample size and to extract the drop-out data. The rates of drop-out from the experimental intervention and study were calculated with meta-analysis of proportions. Meta-regression was applied to explore the association between the study and sample characteristics and the drop-out rates. 43 RCTs were found, with drop-out from intervention ranging from 0% to 63% and study drop-out ranging from 4% to 71%. Meta-analyses of proportions showed an overall drop-out rate of 14% (95% CI: 13-15%) at the experimental intervention level and 20% (95% CI: 17-24%) at the study level. Meta-regression showed that the active intervention drop-out rates were predicted by the number of intervention sessions. In non-pharmacological schizophrenia trials, drop-out rates of less than 20% can be achieved for both the study and the experimental intervention. A high heterogeneity of drop-out rates across studies shows that even lower rates are achievable. Copyright © 2017 Elsevier Ltd. All rights reserved.
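The "meta-analysis of proportions" step above can be sketched as fixed-effect inverse-variance pooling on the logit scale. The trial counts below are hypothetical, and the published analysis likely used a random-effects model, so this is a simplified illustration of the pooling idea only:

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale. Returns the pooled proportion and a 95% confidence interval."""
    w_sum = wx_sum = 0.0
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)                  # continuity correction
        logit = math.log(p / (1.0 - p))
        var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)  # variance of the logit
        w = 1.0 / var
        w_sum += w
        wx_sum += w * logit
    pooled_logit = wx_sum / w_sum
    se = math.sqrt(1.0 / w_sum)
    inv = lambda z: 1.0 / (1.0 + math.exp(-z))     # back-transform
    return inv(pooled_logit), (inv(pooled_logit - 1.96 * se),
                               inv(pooled_logit + 1.96 * se))

# Hypothetical drop-out counts from three trials (events, sample sizes):
p, (lo, hi) = pooled_proportion([15, 30, 10], [100, 200, 120])
```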
Experimental and computational fluid dynamic studies of mixing for complex oral health products
NASA Astrophysics Data System (ADS)
Garcia, Marti Cortada; Mazzei, Luca; Angeli, Panagiota
2015-11-01
Mixing highly viscous non-Newtonian fluids is common in the consumer health industry. This process is often empirical and involves many product-specific pilot plant trials. The first step in studying the mixing process is to build knowledge of the rheology of the fluids involved. In this research a systematic approach is used to validate the rheology of two liquids: glycerol and a gel formed by polyethylene glycol and carbopol. Initially, the constitutive equation is determined, which relates the viscosity of the fluids to temperature, shear rate, and concentration. The key variable for the validation is the power required for mixing, which can be obtained both from CFD and experimentally using a stirred tank and impeller of well-defined geometries at different impeller speeds. A good agreement between the two values indicates a successful validation of the rheology and allows the CFD model to be used for the study of mixing in the complex vessel geometries and increased sizes encountered during scale-up.
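The validation variable named above, mixing power, follows directly from measured shaft torque and impeller speed via P = 2*pi*N*T, and is often reported as the dimensionless power number. The torque values, fluid density, and impeller diameter below are hypothetical illustrations, not data from this study:

```python
import math

def mixing_power(torque_Nm, rpm):
    """Shaft power drawn by an impeller: P = 2*pi*N*T, with N in rev/s."""
    return 2.0 * math.pi * (rpm / 60.0) * torque_Nm

def power_number(power_W, rho, rpm, D):
    """Dimensionless power number Np = P / (rho * N^3 * D^5)."""
    N = rpm / 60.0
    return power_W / (rho * N ** 3 * D ** 5)

# Hypothetical validation point: measured vs. CFD-predicted torque
P_exp = mixing_power(torque_Nm=0.52, rpm=300)   # from the torque meter
P_cfd = mixing_power(torque_Nm=0.49, rpm=300)   # from the CFD model
rel_err = abs(P_cfd - P_exp) / P_exp            # agreement criterion

# Power number for the experimental point (water-like density, 0.1 m impeller)
Np = power_number(P_exp, rho=1000.0, rpm=300, D=0.1)
```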
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlcek, Lukas; Chialvo, Ariel; Simonson, J Michael
2013-01-01
Molecular models and experimental estimates based on the cluster pair approximation (CPA) provide inconsistent predictions of absolute single-ion hydration properties. To understand the origin of this discrepancy we used molecular simulations to study the transition between hydration of alkali metal and halide ions in small aqueous clusters and bulk water. The results demonstrate that the assumptions underlying the CPA are not generally valid as a result of a significant shift in the ion hydration free energies (~15 kJ/mol) and enthalpies (~47 kJ/mol) in the intermediate range of cluster sizes. When this effect is accounted for, the systematic differences between models and experimental predictions disappear, and the value of absolute proton hydration enthalpy based on the CPA comes into closer agreement with other estimates.
Lee, Joseph G L; Gregory, Kyle R; Baker, Hannah M; Ranney, Leah M; Goldstein, Adam O
2016-01-01
Most smokers become addicted to tobacco products before they are legally able to purchase these products. We systematically reviewed the literature on protocols to assess underage purchase and their ecological validity. We conducted a systematic search in May 2015 in PubMed and PsycINFO. We independently screened records for inclusion. We conducted a narrative review and examined implications of two types of legal authority for protocols that govern underage buy enforcement in the United States: criminal (state-level laws prohibiting sales to youth) and administrative (federal regulations prohibiting sales to youth). Ten studies experimentally assessed underage buy protocols and 44 studies assessed the association between youth characteristics and tobacco sales. Protocols that mimicked real-world youth behaviors were consistently associated with substantially greater likelihood of a sale to a youth. Many of the tested protocols appear to be designed for compliance with criminal law rather than administrative enforcement in ways that limited ecological validity. This may be due to concerns about entrapment. For administrative enforcement in particular, entrapment may be less of an issue than commonly thought. Commonly used underage buy protocols poorly represent the reality of youths' access to tobacco from retailers. Compliance check programs should allow youth to present themselves naturally and attempt to match the community's demographic makeup.
PMID:27050671
Consumption of chocolate in pregnant women and risk of preeclampsia: a systematic review.
Mogollon, Jaime Andres; Boivin, Catherine; Philippe, Kadhel; Turcotte, Stéphane; Lemieux, Simone; Blanchet, Claudine; Bujold, Emmanuel; Dodin, Sylvie
2013-12-20
Previous studies have been limited in reporting the association between chocolate consumption, measured by interviewer-administered questionnaire or by serum theobromine, a biomarker for cocoa, and risk of preeclampsia, and have shown somewhat conflicting results. A systematic review of observational and experimental studies will be carried out. We will examine PubMed, Embase, and the entire Cochrane Library. Studies of chocolate consumption during pregnancy, with or without comparison to placebo or low-flavanol chocolate, will be evaluated to investigate the effect of chocolate consumption in pregnant women on the risk of preeclampsia or pregnancy-induced hypertension. Screening for inclusion, data extraction, and quality assessment will be performed independently by two reviewers in consultation with a third reviewer. Validity of the studies will be ascertained by using the Cochrane Collaboration's tool. Relative risk of preeclampsia will be the primary measure of treatment effect. Heterogeneity will be explored by subgroup analysis according to confounding factors and bias. This systematic review will contribute to establishing the current state of knowledge concerning the possible association between chocolate consumption and prevention of preeclampsia. Furthermore, it will clarify whether additional experimental trials are necessary to better evaluate the benefits of chocolate consumption on the risk of preeclampsia. This systematic review has been registered in the PROSPERO international prospective register of systematic reviews. The registration number is CRD42013005338.
Super earth interiors and validity of Birch's Law for ultra-high pressure metals and ionic solids
NASA Astrophysics Data System (ADS)
Ware, Lucas Andrew
2015-01-01
Super Earths, recently detected by the Kepler Mission, expand the ensemble of known terrestrial planets beyond our Solar System's limited group. Birch's Law and velocity-density systematics have been crucial in constraining our knowledge of the composition of Earth's mantle and core. Recently published static diamond anvil cell experimental measurements of sound velocities in iron, a key deep element in most super Earth models, are inconsistent with each other with regard to the validity of Birch's Law. We examine the range of validity of Birch's Law for several metallic elements, including iron, and ionic solids shocked with a two-stage light gas gun into the ultra-high pressure, temperature fluid state and make comparisons to the recent static data.
Ong, Robert H.; King, Andrew J. C.; Mullins, Benjamin J.; Cooper, Timothy F.; Caley, M. Julian
2012-01-01
We present Computational Fluid Dynamics (CFD) models of the coupled dynamics of water flow, heat transfer and irradiance in and around corals to predict temperatures experienced by corals. These models were validated against controlled laboratory experiments, under constant and transient irradiance, for hemispherical and branching corals. Our CFD models agree very well with the experimental studies. A linear relationship between irradiance and coral surface warming was evident in both the simulation and experimental results, in agreement with heat transfer theory. However, CFD models for the steady state simulation produced a better fit to the linear relationship than the experimental data, likely due to experimental error in the empirical measurements. The consistency of our modelling results with experimental observations demonstrates the applicability of CFD simulations, such as the models developed here, to coral bleaching studies. A study of the influence of coral skeletal porosity and skeletal bulk density on surface warming was also undertaken, demonstrating boundary layer behaviour, and interstitial flow magnitude and temperature profiles in coral cross sections. Our models complement recent studies showing systematic changes in these parameters in some coral colonies and have utility in the prediction of coral bleaching. PMID:22701582
Seo, Hyun-Ju; Kim, Soo Young; Lee, Yoon Jae; Jang, Bo-Hyoung; Park, Ji-Eun; Sheen, Seung-Soo; Hahn, Seo Kyung
2016-02-01
To develop a study Design Algorithm for Medical Literature on Intervention (DAMI) and test its interrater reliability, construct validity, and ease of use. We developed and then revised the DAMI to include detailed instructions. To test the DAMI's reliability, we used a purposive sample of 134 primary, mainly nonrandomized studies. We then compared the study designs as classified by the original authors and through the DAMI. Unweighted kappa statistics were computed to test interrater reliability and construct validity based on the level of agreement between the original and DAMI classifications. Assessment time was also recorded to evaluate ease of use. The DAMI includes 13 study designs, covering experimental and observational studies of interventions and exposure. Both the interrater reliability (unweighted kappa = 0.67; 95% CI [0.64-0.75]) and construct validity (unweighted kappa = 0.63, 95% CI [0.52-0.67]) were substantial. Mean classification time using the DAMI was 4.08 ± 2.44 minutes (range, 0.51-10.92). The DAMI showed substantial interrater reliability and construct validity. Furthermore, given its ease of use, it could be used to accurately classify medical literature for systematic reviews of interventions while minimizing disagreement between the authors of such reviews. Copyright © 2016 Elsevier Inc. All rights reserved.
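The unweighted kappa statistics reported above measure agreement beyond chance between two categorical classifications. A minimal implementation, with hypothetical study-design labels from two raters standing in for the original-author and DAMI classifications:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical design classifications by two raters:
a = ['RCT', 'RCT', 'cohort', 'cohort', 'case-control', 'RCT', 'cohort', 'RCT']
b = ['RCT', 'cohort', 'cohort', 'cohort', 'case-control', 'RCT', 'cohort', 'RCT']
kappa = cohens_kappa(a, b)
```

Values of 0.61-0.80 are conventionally interpreted as "substantial" agreement, which is how the abstract characterizes its kappa estimates of 0.63 and 0.67.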
Koda, Hiroki; Basile, Muriel; Olivier, Marion; Remeuf, Kevin; Nagumo, Sumiharu; Blois-Heulin, Catherine; Lemasson, Alban
2013-08-01
The central position and universality of music in human societies raises the question of its phylogenetic origin. One of the most important properties of music involves harmonic musical intervals, in response to which humans show a spontaneous preference for consonant over dissonant sounds starting from early human infancy. Comparative studies conducted with organisms at different levels of the primate lineage are needed to understand the evolutionary scenario under which this phenomenon emerged. Although previous research found no preference for consonance in a New World monkey species, the question remained open for Old World monkeys. We used an experimental paradigm based on a sensory reinforcement procedure to test auditory preferences for consonant sounds in Campbell's monkeys (Cercopithecus campbelli campbelli), an Old World monkey species. Although a systematic preference for soft (70 dB) over loud (90 dB) control white noise was found, Campbell's monkeys showed no preference for either consonant or dissonant sounds. The preference for soft white noise validates our noninvasive experimental paradigm, which can be easily reused in any captive facility to test for auditory preferences. This suggests that human preference for consonant sounds is not systematically shared with New and Old World monkeys. The sensitivity for harmonic musical intervals probably emerged very late in the primate lineage.
Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan
2017-12-27
Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, for example, by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail, and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
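One step of the Same Analysis Approach, running the identical analysis pipeline on simulated null data, can be sketched as follows. The nearest-centroid decoder here is a stand-in assumption for whatever classifier the main analysis uses; the point is that the same pipeline applied to pure noise should yield chance-level accuracy:

```python
import numpy as np

rng = np.random.default_rng(42)

def nearest_centroid_cv(X, y, n_folds=5):
    """Leave-fold-out cross-validated accuracy of a nearest-centroid
    decoder for binary labels y in {0, 1}."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for f in folds:
        train = np.setdiff1d(idx, f)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        for i in f:
            pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
            correct += pred == y[i]
    return correct / len(y)

# Simulated null data: pure noise features with random labels, pushed
# through the *same* analysis used for the real data.
X_null = rng.normal(size=(100, 20))
y_null = rng.integers(0, 2, size=100)
acc_null = nearest_centroid_cv(X_null, y_null)
```

A systematic deviation from chance on null data, above or below, flags exactly the kind of design/cross-validation mismatch the abstract describes.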
Porter, Kathleen; Estabrooks, Paul; Zoellner, Jamie
2016-01-01
Background: Sugar-sweetened beverage (SSB) consumption among children and adolescents is a determinant of childhood obesity. Many programs to reduce consumption across the socio-ecological model report significant positive results; however, the generalizability of the results, including whether reporting differences exist among socio-ecological strategy levels, is unknown. Objectives: This systematic review aims to (1) examine the extent to which studies reported internal and external validity indicators defined by RE-AIM (reach, effectiveness, adoption, implementation, maintenance) and (2) assess reporting differences by socio-ecological level: intrapersonal/interpersonal (Level 1), environmental/policy (Level 2), multi-level (Combined Level). Methods: A systematic literature review of six major databases (PubMed, Web of Science, CINAHL, CAB Abstracts, ERIC, and Agricola) was conducted to identify studies from 2004–2015 meeting inclusion criteria (targeting children aged 3–12, adolescents 13–17, and young adults 18 years; experimental/quasi-experimental; substantial SSB component). Interventions were categorized by socio-ecological level, and data were extracted using a validated RE-AIM protocol. A one-way ANOVA assessed differences between levels. Results: Fifty-five eligible studies were accepted, including 21 Level 1, 18 Level 2, and 16 Combined Level studies. Thirty-six (65%) were conducted in the USA, 19 (35%) internationally, and 39 (71%) were implemented in schools. Across levels, reporting averages were low for all RE-AIM dimensions (reach=29%, efficacy/effectiveness=45%, adoption=26%, implementation=27%, maintenance=14%). Level 2 studies had significantly lower reporting on reach and effectiveness (10% and 26%, respectively) compared to Level 1 (44%, 57%) or Combined Level studies (31%, 52%) (p<0.001). Adoption, implementation, and maintenance reporting did not vary among levels.
Conclusion: Interventions to reduce SSB in children and adolescents across the socio-ecological spectrum do not provide the necessary information for dissemination and implementation in community nutrition settings. Future interventions should address both internal and external validity to maximize population impact. PMID:27262383
Ross, Vincent; Dion, Denis; St-Germain, Daniel
2012-05-01
Radiometric images taken in mid-wave and long-wave infrared bands are used as a basis for validating a sea surface bidirectional reflectance distribution function (BRDF) being implemented into MODTRAN 5 (Berk et al. [Proc. SPIE 5806, 662 (2005)]). The images were obtained during the MIRAMER campaign that took place in May 2008 in the Mediterranean Sea near Toulon, France. When atmosphere radiances are matched at the horizon to remove possible calibration offsets, the implementation of the BRDF in MODTRAN produces good sea surface radiance agreement, usually within 2% and at worst 4%, from off-glint azimuthally averaged measurements. Simulations also compare quite favorably to glint measurements. The observed sea radiance deviations between model and measurements are not systematic, and are well within expected experimental uncertainties. This is largely attributed to proper radiative coupling between the surface and the atmosphere implemented using the DISORT multiple scattering algorithm.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen
2012-01-01
Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification of gene expression. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase), were not suitable in many experimental conditions. In addition, the two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions.
PMID:22952972
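The geNorm algorithm mentioned above ranks candidate reference genes by a stability measure M: the average standard deviation of pairwise log2 expression ratios against every other candidate, with lower M meaning more stable. A toy illustration, using a hypothetical expression matrix rather than the papaya data:

```python
import numpy as np

def genorm_m_values(expr):
    """geNorm-style stability measure M for each candidate reference gene.
    `expr` is a genes-by-samples matrix of linear-scale expression values."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[0]
    M = np.empty(n_genes)
    for j in range(n_genes):
        # St.dev. of the log-ratio of gene j against each other gene
        sds = [np.std(log_expr[j] - log_expr[k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)
    return M

# Hypothetical expression matrix: 3 candidate genes x 5 samples
expr = np.array([
    [100.0, 110.0,  95.0, 105.0, 102.0],   # stable candidate
    [ 50.0,  55.0,  48.0,  52.0,  51.0],   # stable candidate
    [200.0,  40.0, 300.0,  80.0, 500.0],   # unstable candidate
])
M = genorm_m_values(expr)
```

geNorm proper also iteratively removes the least stable gene and recomputes M; the single pass above is the core of that ranking.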
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horstemeyer, Mark R.; Chaudhuri, Santanu
2015-09-30
A multiscale modeling Internal State Variable (ISV) constitutive model was developed that captures the fundamental structure-property relationships. The macroscale ISV model used lower length scale simulations (Butler-Volmer and electronic structure results) in order to inform the ISVs at the macroscale. The chemomechanical ISV model was calibrated and validated from experiments with magnesium (Mg) alloys that were investigated under corrosive environments coupled with experimental electrochemical studies. Because the ISV chemomechanical model is physically based, it can be used for other material systems to predict corrosion behavior. As such, others can use the chemomechanical model for analyzing corrosion effects on their designs.
NASA National Combustion Code Simulations
NASA Technical Reports Server (NTRS)
Iannetti, Anthony; Davoudzadeh, Farhad
2001-01-01
A systematic effort is in progress to further validate the National Combustion Code (NCC) that has been developed at NASA Glenn Research Center (GRC) for comprehensive modeling and simulation of aerospace combustion systems. The validation efforts include numerical simulation of the gas-phase combustor experiments conducted at the Center for Turbulence Research (CTR), Stanford University, followed by comparison and evaluation of the computed results with the experimental data. Presently, at GRC, a numerical model of the experimental gaseous combustor is built to simulate the experimental model. The constructed numerical geometry includes the flow development sections for the air annulus and fuel pipe, 24-channel air and fuel swirlers, hub, combustor, and tail pipe. Furthermore, a three-dimensional multi-block grid (1.6 million grid points, 3 levels of multi-grid) is generated. Computational simulation of the gaseous combustor flow field operating on methane fuel has started. The computational domain includes the whole flow regime starting from the fuel pipe and the air annulus, through the 12 air and 12 fuel channels, in the combustion region and through the tail pipe.
A TRPV2 interactome-based signature for prognosis in glioblastoma patients.
Doñate-Macián, Pau; Gómez, Antonio; Dégano, Irene R; Perálvarez-Marín, Alex
2018-04-06
Proteomics aids the discovery and expansion of protein-protein interaction networks, which are key to understanding molecular mechanisms in physiology and physiopathology, but also to inferring protein function in a guilt-by-association fashion. In this study we use a systematic protein-protein interaction membrane yeast two-hybrid method to expand the interactome of TRPV2, a cation channel related to nervous system development. After validation of the interactome in silico, we define a TRPV2-interactome signature combining proteomics with the available physio-pathological data in DisGeNET to find interactome-disease associations, highlighting nervous system disorders and neoplasms. The TRPV2-interactome signature against available experimental data is capable of discriminating overall risk in glioblastoma multiforme prognosis, progression, recurrence, and chemotherapy resistance. Beyond the impact on glioblastoma physiopathology, this study shows that combining systematic proteomics with in silico methods and available experimental data is key to opening new perspectives to define novel biomarkers for diagnosis, prognosis and therapeutics in disease.
PMID:29719613
Fang, Jiansong; Wu, Zengrui; Cai, Chuipu; Wang, Qi; Tang, Yun; Cheng, Feixiong
2017-11-27
Natural products with diverse chemical scaffolds have been recognized as an invaluable source of compounds in drug discovery and development. However, systematic identification of drug targets for natural products at the human proteome level via various experimental assays is highly expensive and time-consuming. In this study, we proposed a systems pharmacology infrastructure to predict new drug targets and anticancer indications of natural products. Specifically, we reconstructed a global drug-target network with 7,314 interactions connecting 751 targets and 2,388 natural products, and built predictive network models via a balanced substructure-drug-target network-based inference approach. A high area under the receiver operating characteristic curve of 0.96 was obtained for predicting new targets of natural products during cross-validation. The newly predicted high-scoring targets of natural products (e.g., resveratrol, genistein, and kaempferol) were validated by various literature studies. We further built statistical network models for the identification of new anticancer indications of natural products through integration of both experimentally validated and computationally predicted drug-target interactions of natural products with known cancer proteins. We showed that the significantly predicted anticancer indications of multiple natural products (e.g., naringenin, disulfiram, and metformin) with new mechanisms of action were validated by various published experimental evidence. In summary, this study offers powerful computational systems pharmacology approaches and tools for the development of novel targeted cancer therapies by exploiting the polypharmacology of natural products.
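The core idea of network-based inference described above can be illustrated with a minimal sketch: a candidate drug-target pair is scored by the similarity-weighted votes of drugs with known interactions with that target. All drug names, similarity values, and interactions below are invented for illustration; this is not the paper's data or code.

```python
# Hypothetical sketch of similarity-weighted network-based inference
# for drug-target prediction. Toy data only.

def predict_score(drug, target, interactions, similarity):
    """Score a candidate drug-target pair as the similarity-weighted
    fraction of neighboring drugs known to interact with the target."""
    num = sum(similarity[drug][d] for d in interactions
              if target in interactions[d] and d != drug)
    den = sum(similarity[drug][d] for d in interactions if d != drug)
    return num / den if den else 0.0

# Toy data: two annotated drugs and their targets, plus chemical similarity
# of a query compound to each (all values illustrative).
interactions = {"resveratrol": {"SIRT1", "PTGS2"}, "genistein": {"ESR1", "PTGS2"}}
similarity = {"kaempferol": {"resveratrol": 0.8, "genistein": 0.6}}

score_ptgs2 = predict_score("kaempferol", "PTGS2", interactions, similarity)
score_sirt1 = predict_score("kaempferol", "SIRT1", interactions, similarity)
```

Because both annotated neighbors hit PTGS2, that target scores 1.0, while SIRT1 (supported by only one neighbor) scores lower; ranking candidate targets by such scores is the essence of the approach.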
Bandyopadhyay, Sanghamitra; Mitra, Ramkrishna
2009-10-15
Prediction of microRNA (miRNA) target mRNAs using machine learning approaches is an important area of research. However, most methods suffer from either high false positive or high false negative rates. One reason for this is the marked deficiency of negative examples, or miRNA non-target pairs. Systematic identification of non-target mRNAs has not been addressed properly, and therefore current machine learning approaches are compelled to rely on artificially generated negative examples for training. In this article, we have identified approximately 300 tissue-specific negative examples using a novel approach that involves expression profiling of both miRNAs and mRNAs, miRNA-mRNA structural interactions, and seed-site conservation. The newly generated negative examples are validated against the pSILAC dataset, which confirms that the identified non-targets are indeed non-targets. These high-throughput tissue-specific negative examples and a set of experimentally verified positive examples are then used to build a system called TargetMiner, a support vector machine (SVM)-based classifier. In addition to assessing the prediction accuracy in cross-validation experiments, TargetMiner has been validated with a completely independent experimental test dataset. Our method outperforms 10 existing target prediction algorithms and provides a good balance between sensitivity and specificity that is not reflected in the existing methods. We achieve a significantly higher sensitivity and specificity of 69% and 67.8% based on a pool of 90 features, and of 76.5% and 66.1% using a set of 30 selected features, on the completely independent test dataset. To establish the effectiveness of the systematically generated negative examples, the SVM was also trained using a different set of negative data generated using the method of Yousef et al. A significantly higher false positive rate (70.6%) was observed when tested on the independent set, with all other factors kept the same. Conversely, when an existing method (NBmiRTar) is executed with our proposed negative data, we observe an improvement in its performance. These results clearly establish the effectiveness of the proposed approach of selecting negative examples systematically. TargetMiner is available as an online tool at www.isical.ac.in/~bioinfo_miu
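The sensitivity/specificity balance reported above follows from the standard confusion-matrix counts. A minimal sketch, using toy labels rather than the pSILAC or independent test data:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# The label vectors below are invented for illustration.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = true target, 0 = non-target
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

A classifier trained on artificial negatives can score well on one metric while collapsing on the other; reporting both, as the study does, exposes that imbalance.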
Spatial Variation of Pressure in the Lyophilization Product Chamber Part 1: Computational Modeling.
Ganguly, Arnab; Varma, Nikhil; Sane, Pooja; Bogner, Robin; Pikal, Michael; Alexeenko, Alina
2017-04-01
The flow physics in the product chamber of a freeze dryer involves coupled heat and mass transfer at different length and time scales. The low-pressure environment and the relatively small flow velocities make it difficult to quantify the flow structure experimentally. The current work presents three-dimensional computational fluid dynamics (CFD) modeling of vapor flow in a laboratory-scale freeze dryer, validated against experimental data and theory. The model accounts for the presence of a non-condensable gas such as nitrogen or air using a continuum multi-species model. The flow structure at different sublimation rates, chamber pressures, and shelf-gaps is systematically investigated. Emphasis has been placed on accurately predicting the pressure variation across the subliming front. At a chamber set pressure of 115 mtorr and a sublimation rate of 1.3 kg/h/m², the pressure variation reaches about 9 mtorr. The pressure variation increased linearly with sublimation rate in the range of 0.5 to 1.3 kg/h/m². The dependence of pressure variation on the shelf-gap was also studied both computationally and experimentally. The CFD modeling results are found to agree within 10% of the experimental measurements. The computational model was also compared to an analytical solution valid for small shelf-gaps. Thus, the current work presents a validation study motivating broader use of CFD in optimizing freeze-drying processes and equipment design.
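The reported linear dependence of pressure variation on sublimation rate can be characterized with an ordinary least-squares fit. The data points below are hypothetical, constructed only to lie on a straight line with a slope roughly consistent with the reported figures (about 9 mtorr at 1.3 kg/h/m²); they are not measurements from the study.

```python
# Least-squares slope/intercept for pressure variation vs. sublimation rate.
# Data are illustrative, not from the paper.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

rates = [0.5, 0.9, 1.3]   # sublimation rate, kg/h/m^2
dp = [3.5, 6.3, 9.1]      # pressure variation, mtorr (hypothetical)
slope, intercept = fit_line(rates, dp)
```

A near-zero intercept would indicate that the pressure variation vanishes as the sublimation rate goes to zero, consistent with the physical picture of sublimation-driven flow.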
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Sensitivity of shock boundary-layer interactions to weak geometric perturbations
NASA Astrophysics Data System (ADS)
Kim, Ji Hoon; Eaton, John K.
2016-11-01
Shock-boundary layer interactions can be sensitive to small changes in the inlet flow and boundary conditions. Robust computational models must capture this sensitivity, and validation of such models requires a suitable experimental database with well-defined inlet and boundary conditions. To that end, the purpose of this experiment is to systematically document the effects of small geometric perturbations on a SBLI flow to investigate the flow physics and establish an experimental dataset tailored for CFD validation. The facility used is a Mach 2.1, continuous operation wind tunnel. The SBLI is generated using a compression wedge; the region of interest is the resulting reflected shock SBLI. The geometric perturbations, which are small spanwise rectangular prisms, are introduced ahead of the compression ramp on the opposite wall. PIV is used to study the SBLI for 40 different perturbation geometries. Results show that the dominant effect of the perturbations is a global shift of the SBLI itself. In addition, the bumps introduce weaker shocks of varying strength and angles, depending on the bump height and location. Various scalar validation metrics, including a measure of shock unsteadiness, and their uncertainties are also computed to better facilitate CFD validation. Ji Hoon Kim is supported by an OTR Stanford Graduate Fellowship.
R. A. Fisher and his advocacy of randomization.
Hall, Nancy S
2007-01-01
The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.
Le Roux, E; Mellerio, H; Guilmin-Crépon, S; Gottot, S; Jacquin, P; Boulkedid, R; Alberti, C
2017-01-01
Objective To explore the methodologies employed in studies assessing transition of care interventions, with the aim of defining goals for the improvement of future studies. Design Systematic review of comparative studies assessing transition to adult care interventions for young people with chronic conditions. Data sources MEDLINE, EMBASE, ClinicalTrials.gov. Eligibility criteria for selecting studies Two reviewers screened comparative studies with experimental and quasi-experimental designs, published or registered before July 2015. Eligible studies evaluated transition interventions, at least in part after transfer to adult care, in young people with chronic conditions, with at least one outcome assessed quantitatively. Results 39 studies were reviewed: 26/39 (67%) had published their final results and 13/39 (33%) were in progress. In 9 studies (9/39, 23%), comparisons were made between preintervention and postintervention in a single group. Randomised control groups were used in 9/39 (23%) studies. Two (2/39, 5%) reported blinding strategies. Use of validated questionnaires was reported in 28% (11/39) of studies. In terms of reporting, 15/26 (58%) of published studies did not report age at transfer, and 6/26 (23%) did not report the time of collection of each outcome. Conclusions Few evaluative studies exist, and their methodological quality is variable. The complexity of interventions, the multiplicity of outcomes, the difficulty of blinding, and the small patient groups limit the conclusions that can be drawn about the effectiveness of interventions. Evaluating transition interventions requires an appropriate and common methodology that will provide a better level of evidence. We identified areas for improvement in terms of randomisation, recruitment and external validity, blinding, measurement validity, standardised assessment, and reporting. These improvements will increase our capacity to determine effective interventions for transition care. PMID:28131998
Franzen, Lutz; Anderski, Juliane; Windbergs, Maike
2015-09-01
For the rational development and evaluation of dermal drug delivery, knowledge of the rate and extent of substance penetration into human skin is essential. However, current analytical procedures are destructive and labor-intensive, and lack a defined spatial resolution. In this context, confocal Raman microscopy bears the potential to overcome current limitations in drug depth profiling. Confocal Raman microscopy has already proved its suitability for the acquisition of qualitative penetration profiles, but a comprehensive investigation of its suitability for quantitative measurements inside human skin is still missing. In this work, we present a systematic validation study to deploy confocal Raman microscopy for quantitative drug depth profiling in human skin. After validating our Raman microscopic setup, we established an experimental procedure that correlates the Raman signal of a model drug with its controlled concentration in human skin. To overcome current drawbacks in drug depth profiling, we evaluated different modes of peak correlation for quantitative Raman measurements and offer a suitable operating procedure for quantitative drug depth profiling in human skin. In conclusion, we demonstrate the potential of confocal Raman microscopy for quantitative drug depth profiling in human skin as a valuable alternative to destructive state-of-the-art techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
Experimental validation of a quasi-steady theory for the flow through the glottis
NASA Astrophysics Data System (ADS)
Vilain, C. E.; Pelorson, X.; Fraysse, C.; Deverge, M.; Hirschberg, A.; Willems, J.
2004-09-01
In this paper a theoretical description of the flow through the glottis based on a quasi-steady boundary layer theory is presented. The Thwaites method is used to solve the von Kármán equations within the boundary layers. In practice this makes the theory much easier to use compared to Pohlhausen's polynomial approximations. This theoretical description is evaluated on the basis of systematic comparison with experimental data obtained under steady flow or unsteady (oscillating) flow without and with moving vocal folds. Results tend to show that the theory reasonably explains the measured data except when unsteady or viscous terms become predominant. This happens particularly during the collision of the vocal folds.
Quasi-experimental study designs series-paper 6: risk of bias assessment.
Waddington, Hugh; Aloe, Ariel M; Becker, Betsy Jane; Djimeu, Eric W; Hombrados, Jorge Garcia; Tugwell, Peter; Wells, George; Reeves, Barney
2017-09-01
Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk of bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables. These are nonrandomized studies using design-based approaches to control for unobservable sources of confounding such as difference studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs. We review existing risk of bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions. The review suggests that existing risk of bias tools provide, to different degrees, incomplete transparent criteria to assess the validity of these designs. The paper then presents an approach to evaluating the internal validity of quasi-experiments with selection on unobservables. We conclude that tools for nonrandomized studies of interventions need to be further developed to incorporate evaluation questions for quasi-experiments with selection on unobservables. Copyright © 2017 Elsevier Inc. All rights reserved.
Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J
2017-07-01
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tissue-Specific Analysis of Pharmacological Pathways.
Hao, Yun; Quinnies, Kayla; Realubit, Ronald; Karan, Charles; Tatonetti, Nicholas P
2018-06-19
Understanding the downstream consequences of pharmacologically targeted proteins is essential to drug design. Current approaches investigate molecular effects under tissue-naïve assumptions. Many target proteins, however, have tissue-specific expression. A systematic study connecting drugs to target pathways in in vivo human tissues is needed. We introduced a data-driven method that integrates drug-target relationships with gene expression, protein-protein interaction, and pathway annotation data. We applied our method to four independent genomewide expression datasets and built 467,396 connections between 1,034 drugs and 954 pathways in 259 human tissues or cell lines. We validated our results using data from L1000 and the Pharmacogenomics Knowledgebase (PharmGKB), and observed high precision and recall. We predicted previously unknown anticoagulant effects of 22 compounds, tested them experimentally, and validated these effects retrospectively using clinical data. Our systematic study provides a better understanding of the cellular response to drugs and can be applied to many research topics in systems pharmacology. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
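Precision and recall against a reference knowledge base, as in the L1000/PharmGKB validation above, reduce to simple set overlap between predicted and reference connections. The drug-pathway pairs below are invented purely for illustration:

```python
# Precision = |predicted ∩ reference| / |predicted|
# Recall    = |predicted ∩ reference| / |reference|
# Connection sets are toy examples, not the study's data.

predicted = {("aspirin", "COX pathway"), ("aspirin", "NF-kB pathway"),
             ("warfarin", "coagulation cascade")}
reference = {("aspirin", "COX pathway"), ("warfarin", "coagulation cascade"),
             ("metformin", "AMPK signaling")}

tp = len(predicted & reference)
precision = tp / len(predicted)
recall = tp / len(reference)
```

High values of both metrics indicate that the predicted drug-pathway connections both avoid spurious links and recover most of the independently curated ones.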
da L D Barros, Manuella; Manhães-de-Castro, Raul; Alves, Daniele T; Quevedo, Omar Guzmán; Toscano, Ana Elisa; Bonnin, Alexandre; Galindo, Ligia
2018-06-08
Serotonin exerts a modulating function on the development of the central nervous system, including hypothalamic circuits controlling feeding behavior and energy expenditure. Based on the developmental plasticity theory, early disturbances in the synaptic availability of serotonin may promote phenotypic adaptations and later disorders of energy balance regulation, leading to obesity and associated diseases. The aim of this systematic review is to determine the effects of pharmacological neonatal inhibition of serotonin reuptake by fluoxetine on parameters related to feeding behavior and energy balance. Literature searches were performed in the Medline/PubMed and Lilacs databases, yielding 9726 studies. Using a predefined protocol registered on the CAMARADES website, 23 studies were included for qualitative synthesis. Internal validity was assessed using SYRCLE's risk-of-bias tool. The kappa index was also calculated to assess concordance between reviewers. In addition, the PRISMA statement was used for reporting this systematic review. Most of the included studies demonstrated that neonatal serotonin reuptake inhibition is associated with long-term reduced body weight, lower fat mass, and higher thermogenic capacity and mitochondrial oxygen consumption in key metabolic tissues. Therefore, experimental fluoxetine exposure during neonatal development may promote long-term changes in energy balance associated with a lean phenotype. Copyright © 2018 Elsevier B.V. All rights reserved.
Systematic harmonic power laws inter-relating multiple fundamental constants
NASA Astrophysics Data System (ADS)
Chakeres, Donald; Buckhanan, Wayne; Andrianarijaona, Vola
2017-01-01
Power laws and harmonic systems are ubiquitous in physics. We hypothesize that 2, π, the electron, Bohr radius, Rydberg constant, neutron, fine structure constant, Higgs boson, top quark, kaons, pions, muon, tau, W, and Z, when scaled in a common single unit, are all inter-related by systematic harmonic power laws. This implies that if the power law is known, it is possible to derive a fundamental constant's scale in the absence of any direct experimental data for that constant. This is true for the case of the hydrogen constants. We created a power law search engine computer program that randomly generated possible positive or negative powers, searching for cases where the product of a logical group of constants equals 1, confirming they are physically valid. For 2, π, and the hydrogen constants, the search engine found Planck's constant, Coulomb's energy law, and the kinetic energy law. The product of ratios, each defined by two constants, was the standard general format. The search engine found systematic resonant power laws based on partial harmonic fraction powers of the neutron for all of the constants, with products near 1 within their known experimental precision, when utilized with appropriate hydrogen constants. We conclude that multiple fundamental constants are inter-related within a harmonic power law system.
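The search-engine idea, forming products of constants raised to small integer powers and flagging products near 1, can be sketched as follows. This sketch only rechecks the textbook hydrogen relation R∞ = α²mₑc/(2h) against CODATA values; it is an illustration of the "product equals 1" test, not the authors' program.

```python
import math

# Flag a power-law combination as physically valid when the product of
# constants raised to the chosen powers is ~1. CODATA 2018 values.
CONSTANTS = {
    "alpha": 7.2973525693e-3,      # fine structure constant
    "m_e": 9.1093837015e-31,       # electron mass, kg
    "c": 2.99792458e8,             # speed of light, m/s
    "h": 6.62607015e-34,           # Planck constant, J s
    "R_inf": 1.0973731568160e7,    # Rydberg constant, 1/m
    "two": 2.0,
}

def product(powers):
    """Product of constants raised to the given (possibly negative) powers."""
    return math.prod(CONSTANTS[k] ** p for k, p in powers.items())

# alpha^2 * m_e * c / (2 * h * R_inf) should be ~1 if R_inf = alpha^2 m_e c / 2h.
ratio = product({"alpha": 2, "m_e": 1, "c": 1, "h": -1, "R_inf": -1, "two": -1})
```

A full search would loop over randomly drawn exponent vectors and keep those whose product falls within experimental precision of 1, as the abstract describes.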
The design, implementation, and evaluation of online credit nutrition courses: a systematic review.
Cohen, Nancy L; Carbone, Elena T; Beffa-Negrini, Patricia A
2011-01-01
To assess how postsecondary online nutrition education courses (ONEC) are delivered, determine ONEC effectiveness, identify theoretical models used, and identify future research needs. Systematic search of database literature. Postsecondary education. Nine research articles evaluating postsecondary ONEC. Knowledge/performance outcomes and student satisfaction, motivation, or perceptions. Systematic search of 922 articles and review of 9 articles meeting search criteria. Little research regarding ONEC marketing/management existed. Studies primarily evaluated introductory courses using email/websites (before 2000), or course management systems (after 2002). None used true experimental designs; just 3 addressed validity or reliability of measures or pilot-tested instruments. Three articles used theoretical models in course design; few used theories to guide evaluations. Four quasi-experimental studies indicated no differences in nutrition knowledge/performance between online and face-to-face learners. Results were inconclusive regarding student satisfaction, motivation, or perceptions. Students can gain knowledge in online as well as in face-to-face nutrition courses, but satisfaction was mixed. More up-to-date investigations on effective practices are warranted, using theories to identify factors that enhance student outcomes, addressing emerging technologies, and documenting ONEC marketing, management, and delivery. Adequate training/support for faculty is needed to improve student experiences and faculty time management. Copyright © 2011 Society for Nutrition Education. Published by Elsevier Inc. All rights reserved.
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
Li, Haiquan; Dai, Xinbin; Zhao, Xuechun
2008-05-01
Membrane transport proteins play a crucial role in the import and export of ions, small molecules, and macromolecules across biological membranes. Currently, there are a limited number of published computational tools that enable the systematic discovery and categorization of transporters prior to costly experimental validation. To approach this problem, we utilized a nearest neighbor method that seamlessly integrates homology search and topological analysis into a machine-learning framework. Our approach satisfactorily distinguished 484 transporter families in the Transporter Classification Database, a curated and representative database of transporters. A five-fold cross-validation on the database achieved a positive classification rate of 72.3% on average. Furthermore, this method successfully detected transporters in seven model and four non-model organisms, ranging from archaeal to mammalian species. A preliminary literature-based validation confirmed 65.8% of our predictions on the 11 organisms; 55.9% of our predictions overlap with 83.6% of the transporters predicted in TransportDB.
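The nearest-neighbor classification step can be sketched as follows: a query protein inherits the family label of its closest annotated neighbor in feature space. The feature vectors and family labels below are invented for illustration and do not reflect the actual homology and topology features the method uses.

```python
# Minimal 1-nearest-neighbor sketch for transporter family assignment.
# Features and labels are toy values, not the method's real descriptors.

def nearest_neighbor(query, labeled):
    """Return the label of the labeled example closest to the query."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: sq_dist(query, item[0]))[1]

# Hypothetical (feature_vector, family) pairs.
labeled = [((0.9, 0.1, 6), "ABC transporter"),
           ((0.2, 0.8, 12), "MFS transporter"),
           ((0.1, 0.7, 11), "MFS transporter")]

family = nearest_neighbor((0.8, 0.2, 7), labeled)
```

Five-fold cross-validation, as reported in the abstract, would repeatedly hold out one fifth of the labeled set and measure how often this assignment matches the held-out labels.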
Kleikers, Pamela W M; Hooijmans, Carlijn; Göb, Eva; Langhauser, Friederike; Rewell, Sarah S J; Radermacher, Kim; Ritskes-Hoitinga, Merel; Howells, David W; Kleinschnitz, Christoph; Schmidt, Harald H H W
2015-08-27
Biomedical research suffers from dramatically poor translational success. For example, in ischemic stroke, a condition with a high medical need, over a thousand experimental drug targets were unsuccessful. Here, we adopt methods from clinical research for a late-stage pre-clinical meta-analysis (MA) and randomized confirmatory trial (pRCT) approach. A profound body of literature suggests NOX2 to be a major therapeutic target in stroke. Systematic review and MA of all available NOX2(-/y) studies revealed a positive publication bias and a lack of statistical power to detect a relevant reduction in infarct size. A fully powered multi-center pRCT rejects NOX2 as a target to improve neurofunctional outcomes or achieve a translationally relevant infarct size reduction. Thus, stringent statistical thresholds, the reporting of negative data, and a MA-pRCT approach can ensure biomedical data validity and overcome risks of bias.
DOT National Transportation Integrated Search
2018-01-11
Background: This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. Methods: A systematic review study design wa...
Ma, Bin; Xu, Jia-Ke; Wu, Wen-Jing; Liu, Hong-Yan; Kou, Cheng-Kun; Liu, Na; Zhao, Lulu
2017-01-01
To investigate the awareness and use of the Systematic Review Center for Laboratory Animal Experimentation (SYRCLE) risk-of-bias tool, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) reporting guidelines, and the Gold Standard Publication Checklist (GSPC) among Chinese basic medical researchers conducting animal experimental studies. A national questionnaire-based survey targeting basic medical researchers was carried out in China to collect basic information and assess awareness of SYRCLE's risk-of-bias tool, the ARRIVE guidelines, the GSPC, and factors for controlling risk of bias in animal experiments. EpiData 3.1 software was used for data entry, and Microsoft Excel 2013 was used for statistical analysis. Categorical data were described as counts (n) and percentages (%), and comparisons between groups (i.e., current students vs. research staff) were performed using the chi-square test. A total of 298 questionnaires were distributed and 272 responses were received, including 266 valid questionnaires (from 118 current students and 148 research staff). Among the 266 survey participants, only 15.8% were aware of SYRCLE's risk-of-bias tool, with a significant difference between the two groups (P = 0.003); the awareness rates of the ARRIVE guidelines and the GSPC were only 9.4% and 9.0%, respectively. Of the participants, 58.6% believed that reports of animal experimental studies in the Chinese literature were inadequate, with a significant difference between the two groups (P = 0.004). In addition, only approximately one-third of the participants had read systematic reviews and meta-analysis reports of animal experimental studies; only 16/266 (6.0%) had carried out or participated in, and 11/266 (4.1%) had published, systematic reviews/meta-analyses of animal experimental studies. The awareness and use rates of SYRCLE's risk-of-bias tool, the ARRIVE guidelines, and the GSPC were low among Chinese basic medical researchers.
Therefore, specific measures are necessary to promote and popularize these standards and specifications and to introduce these standards into guidelines of Chinese domestic journals as soon as possible to raise awareness and increase use rates of researchers and journal editors, thereby improving the quality of animal experimental methods and reports.
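The between-group comparisons above rest on a chi-square test over a 2×2 contingency table (group × awareness). A minimal sketch with hypothetical cell counts, since the abstract reports only percentages and P values, not the exact table:

```python
# Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
# without continuity correction. Counts below are hypothetical.

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: aware / not aware, for students (118) vs. staff (148).
stat = chi_square_2x2(10, 108, 32, 116)
```

A statistic above the 3.84 critical value (df = 1, α = 0.05) would indicate a significant difference in awareness between the two groups, matching the pattern the survey reports.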
ERIC Educational Resources Information Center
Jonsson, Ulf; Olsson, Nora Choque; Bölte, Sven
2016-01-01
Systematic reviews have traditionally focused on internal validity, while external validity often has been overlooked. In this study, we systematically reviewed determinants of external validity in the accumulated randomized controlled trials of social skills group interventions for children and adolescents with autism spectrum disorder. We…
Experimental Evidence of Weak Excluded Volume Effects for Nanochannel Confined DNA
NASA Astrophysics Data System (ADS)
Gupta, Damini; Miller, Jeremy J.; Muralidhar, Abhiram; Mahshid, Sara; Reisner, Walter; Dorfman, Kevin D.
In the classical de Gennes picture of weak polymer nanochannel confinement, the polymer contour is envisioned as divided into a series of isometric blobs. Strong excluded volume interactions are present both within a blob and between blobs. In contrast, for semiflexible polymers like DNA, excluded volume interactions are of borderline strength within a blob but appreciable between blobs, giving rise to a chain description consisting of a string of anisometric blobs. We present experimental validation of this subtle effect of excluded volume for DNA nanochannel confinement by performing measurements of variance in chain extension of T4 DNA molecules as a function of effective nanochannel size (305-453 nm). Additionally, we show an approach to systematically reduce the effect of molecular weight dispersity of DNA samples, a typical experimental artifact, by combining confinement spectroscopy with simulations.
Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P
2013-02-01
Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 μm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, this work represents the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations.
Flux analysis and metabolomics for systematic metabolic engineering of microorganisms.
Toya, Yoshihiro; Shimizu, Hiroshi
2013-11-01
Rational engineering of metabolism is important for bio-production using microorganisms. Metabolic design based on in silico simulations and experimental validation of the metabolic state in the engineered strain helps in accomplishing systematic metabolic engineering. Flux balance analysis (FBA) is a method for the prediction of metabolic phenotype, and many applications have been developed using FBA to design metabolic networks. Elementary mode analysis (EMA) and ensemble modeling techniques are also useful tools for in silico strain design. The metabolome and flux distribution of the metabolic pathways enable us to evaluate the metabolic state and provide useful clues to improve target productivity. Here, we reviewed several computational applications for metabolic engineering by using genome-scale metabolic models of microorganisms. We also discussed the recent progress made in the field of metabolomics and 13C-metabolic flux analysis techniques, and reviewed these applications pertaining to bio-production development. Because these in silico or experimental approaches have their respective advantages and disadvantages, the combined usage of these methods is complementary and effective for metabolic engineering. Copyright © 2013 Elsevier Inc. All rights reserved.
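The flux balance analysis (FBA) mentioned above reduces to a linear program: maximize a target flux subject to steady-state mass balance and flux bounds. A minimal sketch on a toy three-reaction network (the network, bounds, and objective are illustrative, not taken from any cited model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> M1), R2 (M1 -> M2), R3 (M2 -> biomass).
# Rows of S are metabolites M1, M2; columns are reactions R1-R3.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

c = [0.0, 0.0, -1.0]                       # linprog minimizes, so negate biomass
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
fluxes = res.x                             # steady-state flux distribution
```

At the optimum every flux equals the uptake cap, since the toy chain has no branch points; genome-scale models differ only in the size of `S`.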
Leonidou, Chrysanthi; Panayiotou, Georgia
2018-08-01
According to the cognitive-behavioral model, illness anxiety is developed and maintained through biased processing of health-threatening information and maladaptive responses to such information. This study is a systematic review of research that attempted to validate central tenets of the cognitive-behavioral model regarding etiological and maintenance mechanisms in illness anxiety. Sixty-two studies, including correlational and experimental designs, were identified through a systematic search of databases and were evaluated for their quality. Outcomes were synthesized following a qualitative thematic approach under categories of theoretically driven mechanisms derived from the cognitive-behavioral model: attention, memory and interpretation biases, perceived awareness and inaccuracy in perception of somatic sensations, negativity bias, emotion dysregulation, and behavioral avoidance. Findings partly support the cognitive-behavioral model, but several of its hypothetical mechanisms only receive weak support due to the scarcity of relevant studies. Directions for future research are suggested based on identified gaps in the existing literature. Copyright © 2018 Elsevier Inc. All rights reserved.
[Effects of tai chi in postmenopausal women with osteoporosis: a systematic review].
Chang, Ting-Jung; Ting, Yu-Ting; Sheu, Shei-Lan; Chang, Hsiao-Yun
2014-10-01
Tai chi has been increasingly applied in osteoporosis patients. However, systematic reviews of the efficacy of this practice have been few and of limited scope. This study reviews previous experimental research using tai chi as an intervention in postmenopausal women with osteoporosis and appraises the reported research designs, tai chi methods, and outcomes. A systematic review method was used to search 14 databases for articles published between January 1980 and July 2013. Searched keywords included: "tai chi," "osteoporosis," and "postmenopausal women". The 2,458 articles initially identified were reduced to 4 valid articles after the inclusion criteria were applied and duplicates removed. The 4 valid articles used either a randomized clinical trial (RCT) or a controlled clinical trial (CCT) design. They were further analyzed and synthesized in terms of common variables such as balance, muscle strength, and quality of life. Three of the 4 studies identified significant pretest / posttest differences in physiological aspects of quality of life in participants but did not obtain consistent results in terms of the psychological aspects. While reports identified a significant and positive tai chi effect on balance, they all used different measurements to do so. Only one of the four studies identified significant improvement in muscle strength. Therefore, this review could not identify clear support for the effectiveness of tai chi on balance or muscle strength. This review did not definitively support the positive effects of tai chi on balance, muscle strength, and quality of life in postmenopausal women with osteoporosis. The designs used in the tai chi interventions may be referenced for future studies. We suggest that future studies use data triangulation rather than a single-item tool in order to cross-verify the same information. This may strengthen the research and increase the credibility and validity of related findings.
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
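The constrained discrete test-selection idea can be illustrated with a brute-force sketch: assume each test type shrinks its uncertainty contribution roughly as 1/(1+n), assign hypothetical per-test costs, and enumerate allocations within a budget (the costs and the variance surrogate below are invented for illustration, not the paper's model):

```python
from itertools import product

COST = {"cal": 2.0, "val": 3.0}   # assumed cost per calibration/validation test
BUDGET = 24.0

def pred_var(n_cal, n_val):
    # assumed prediction-variance surrogate: more tests, less uncertainty
    return 1.0 / (1 + n_cal) + 0.5 / (1 + n_val)

feasible = [(a, b) for a, b in product(range(13), range(9))
            if COST["cal"] * a + COST["val"] * b <= BUDGET]
best = min(feasible, key=lambda nv: pred_var(*nv))   # -> (6, 4) for these numbers
```

A real application would replace `pred_var` with the propagated prediction uncertainty from the calibration/validation framework; the optimization structure stays the same.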
NASA Astrophysics Data System (ADS)
Tian, Lin-Lin; Zhao, Ning; Song, Yi-Lei; Zhu, Chun-Ling
2018-05-01
This work performs a systematic sensitivity analysis of different turbulence models and various inflow boundary conditions in predicting the wake flow behind a horizontal axis wind turbine represented by an actuator disc (AD). The tested turbulence models are the standard k-ε model and the Reynolds Stress Model (RSM). A single wind turbine immersed both in uniform flow and in modeled atmospheric boundary layer (ABL) flow is studied. Simulation results are validated against field experimental data in terms of wake velocity and turbulence intensity.
Vesterinen, Hanna V; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich
2011-04-01
Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication.
Reliable Digit Span: A Systematic Review and Cross-Validation Study
ERIC Educational Resources Information Center
Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.
2012-01-01
Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…
Validation of virtual learning object to support the teaching of nursing care systematization.
Salvador, Pétala Tuani Candido de Oliveira; Mariz, Camila Maria Dos Santos; Vítor, Allyne Fortes; Ferreira Júnior, Marcos Antônio; Fernandes, Maria Isabel Domingues; Martins, José Carlos Amado; Santos, Viviane Euzébia Pereira
2018-01-01
To describe the content validation process of a Virtual Learning Object to support the teaching of nursing care systematization to nursing professionals. A methodological study with a quantitative approach, developed according to the methodological framework of Pasquali's psychometrics and conducted from March to July 2016 using a two-stage Delphi procedure. In the Delphi 1 stage, eight judges evaluated the Virtual Object; in the Delphi 2 stage, seven judges evaluated it. The seven screens of the Virtual Object were analyzed for the suitability of their contents. The Virtual Learning Object to support the teaching of nursing care systematization was considered valid in its content, with a Total Content Validity Coefficient of 0.96. It is expected that the Virtual Object can support the teaching of nursing care systematization in light of appropriate and effective pedagogical approaches.
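A Total Content Validity Coefficient like the 0.96 reported above can be computed, in one common formulation (Hernández-Nieto's CVC with a chance-error correction), from an items-by-judges rating matrix. A sketch with hypothetical ratings (seven screens, eight judges, 1-5 scale; the data are invented):

```python
import numpy as np

def cvc(ratings, scale_max):
    """Per-item Content Validity Coefficient with Hernández-Nieto's
    chance-error correction, plus the total (mean) coefficient."""
    r = np.asarray(ratings, dtype=float)
    n_judges = r.shape[1]
    cvc_i = r.mean(axis=1) / scale_max     # per-item coefficient
    pe = (1.0 / n_judges) ** n_judges      # correction for chance agreement
    cvc_c = cvc_i - pe
    return cvc_c, cvc_c.mean()

rng = np.random.default_rng(0)
ratings = rng.integers(4, 6, size=(7, 8))  # mostly favorable 4s and 5s
per_item, total = cvc(ratings, scale_max=5)
```

Items whose corrected coefficient falls below a preset cutoff (often 0.8) are flagged for revision before the next Delphi round.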
Boysen, Guy A; VanBergen, Alexandra
2014-02-01
Dissociative Identity Disorder (DID) has long been surrounded by controversy due to disagreement about its etiology and the validity of its associated phenomena. Researchers have conducted studies comparing people diagnosed with DID and people simulating DID in order to better understand the disorder. The current research presents a systematic review of this DID simulation research. The literature consists of 20 studies and contains several replicated findings. Replicated differences between the groups include symptom presentation, identity presentation, and cognitive processing deficits. Replicated similarities between the groups include interidentity transfer of information as shown by measures of recall, recognition, and priming. Despite some consistent findings, this research literature is hindered by methodological flaws that reduce experimental validity. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wavelet-based identification of rotor blades in passage-through-resonance tests
NASA Astrophysics Data System (ADS)
Carassale, Luigi; Marrè-Brunenghi, Michela; Patrone, Stefano
2018-01-01
Turbine blades are critical components in turbo engines and their design process usually includes experimental tests in order to validate and/or update numerical models. These tests are generally carried out on full-scale rotors having some blades instrumented with strain gauges and usually involve a run-up or a run-down phase. The quantification of damping in these conditions is rather challenging for several reasons. In this work, we show through numerical simulations that the usual identification procedures lead to a systematic overestimation of damping due both to the finite sweep velocity, as well as to the variation of the blade natural frequencies with the rotation speed. To overcome these problems, an identification procedure based on the continuous wavelet transform is proposed and validated through numerical simulation.
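Damping identification from a decaying blade response can be sketched in a few lines; the paper uses a continuous wavelet transform, but the simpler Hilbert-envelope log-decrement route below illustrates the same principle on synthetic data (the sampling rate, mode frequency, and damping ratio are invented):

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, zeta = 2000.0, 120.0, 0.01            # assumed sampling, mode, damping
t = np.arange(0.0, 2.0, 1.0 / fs)
wn = 2.0 * np.pi * f0
x = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)

env = np.abs(hilbert(x))                      # amplitude envelope
sel = (t > 0.1) & (t < 1.0)                   # avoid end effects
slope = np.polyfit(t[sel], np.log(env[sel]), 1)[0]
zeta_est = -slope / wn                        # recovers ~0.01 here
```

A wavelet-based variant replaces the Hilbert envelope with the modulus of the wavelet transform along the ridge, which is what makes the approach robust when the natural frequency itself drifts with rotation speed during a sweep.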
Optimal cooperative control synthesis of active displays
NASA Technical Reports Server (NTRS)
Garg, S.; Schmidt, D. K.
1985-01-01
A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.
A heuristic mathematical model for the dynamics of sensory conflict and motion sickness
NASA Technical Reports Server (NTRS)
Oman, C. M.
1982-01-01
By consideration of the information processing task faced by the central nervous system in estimating body spatial orientation and in controlling active body movement using an internal model referenced control strategy, a mathematical model for sensory conflict generation is developed. The model postulates a major dynamic functional role for sensory conflict signals in movement control, as well as in sensory-motor adaptation. It accounts for the role of active movement in creating motion sickness symptoms in some experimental circumstances, and in alleviating them in others. The relationship between motion sickness produced by sensory rearrangement and that resulting from external motion disturbances is explicitly defined. A nonlinear conflict averaging model is proposed which describes dynamic aspects of experimentally observed subjective discomfort sensation, and suggests resulting behaviours. The model admits several possibilities for adaptive mechanisms which do not involve internal model updating. Further systematic efforts to experimentally refine and validate the model are indicated.
NASA Technical Reports Server (NTRS)
Geng, Steven M.
1987-01-01
A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.
Statistical Methodologies to Integrate Experimental and Computational Research
NASA Technical Reports Server (NTRS)
Parker, P. A.; Johnson, R. T.; Montgomery, D. C.
2008-01-01
Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
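A design-of-experiments plan of the kind discussed above starts from a coded design matrix; a minimal sketch generating a two-level full factorial in three factors (the factor names are illustrative only, not from the cited experiments):

```python
from itertools import product

factors = ["nozzle_pressure", "jet_temperature", "mach_ratio"]  # hypothetical
# Coded -1/+1 levels for a 2^3 full factorial: 8 runs, every combination.
design = [dict(zip(factors, levels))
          for levels in product((-1, +1), repeat=len(factors))]
```

Fractional factorials, response-surface designs, and randomized run orders all build on this same coded-matrix representation.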
Simplified model of pinhole imaging for quantifying systematic errors in image shape
Benedetti, Laura Robin; Izumi, N.; Khan, S. F.; ...
2017-10-30
In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.
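For contrast with the coupled model the abstract advocates, the common baseline that does treat geometry and diffraction as independent combines the two blur terms in quadrature; all numbers below (pinhole size, wavelength, geometry) are assumed for illustration:

```python
import math

def total_resolution(d, wavelength, obj_dist, magnification):
    """Quadrature ("independent") resolution model for a pinhole camera,
    referred to the object plane."""
    geometric = d * (1.0 + 1.0 / magnification)     # projected pinhole blur
    diffraction = 2.44 * wavelength * obj_dist / d  # Airy-disk scale
    return math.hypot(geometric, diffraction)

d = 10e-6       # pinhole diameter [m]
lam = 0.83e-9   # wavelength of ~1.5 keV x rays [m]
L = 0.1         # pinhole-to-object distance [m]
res = total_resolution(d, lam, L, magnification=4.0)   # tens of microns here
```

The paper's point is precisely that this quadrature sum misstates resolution near the geometric-to-diffractive transition, which is where the sketch above should not be trusted.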
Smerecnik, Chris M R; Mesters, Ilse; Candel, Math J J M; De Vries, Hein; De Vries, Nanne K
2012-01-01
The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research concerns the unavailability of a validated questionnaire of information processing. This article presents two studies in which we describe the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition, after which they completed a manipulation check and the initial 15-item questionnaire, and completed the questionnaire again two weeks later. The questionnaire was subjected to factor, reliability, and validity analyses at both measurement times for purposes of cross-validation of the results. A two-factor solution was observed, representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic processing condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities. © 2011 Society for Risk Analysis.
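Subscale reliability of the kind reported for the Information-Processing Questionnaire is typically summarized with Cronbach's alpha; a self-contained sketch on synthetic data (item scores are simulated from a shared latent factor, not real questionnaire data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))                 # shared trait
scores = latent + 0.5 * rng.normal(size=(100, 5))  # 5 correlated items
alpha = cronbach_alpha(scores)                     # high for this setup
```

Because the five simulated items share most of their variance, alpha comes out high; uncorrelated items would drive it toward zero.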
Numerical and Experimental Study of Wake Redirection Techniques in a Boundary Layer Wind Tunnel
NASA Astrophysics Data System (ADS)
Wang, J.; Foley, S.; Nanos, E. M.; Yu, T.; Campagnolo, F.; Bottasso, C. L.; Zanotti, A.; Croce, A.
2017-05-01
The aim of the present paper is to validate a wind farm LES framework in the context of two distinct wake redirection techniques: yaw misalignment and individual cyclic pitch control. A test campaign was conducted using scaled wind turbine models in a boundary layer wind tunnel, where both particle image velocimetry and hot-wire thermo anemometers were used to obtain high quality measurements of the downstream flow. A LiDAR system was also employed to determine the non-uniformity of the inflow velocity field. A high-fidelity large-eddy simulation lifting-line model was used to simulate the aerodynamic behavior of the system, including the geometry of the wind turbine nacelle and tower. A tuning-free Lagrangian scale-dependent dynamic approach was adopted to improve the sub-grid scale modeling. Comparisons with experimental measurements are used to systematically validate the simulations. The LES results are in good agreement with the PIV and hot-wire data in terms of time-averaged wake profiles, turbulence intensity and Reynolds shear stresses. Discrepancies are also highlighted, to guide future improvements.
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
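The generic scheme behind signal-to-parameter uncertainty propagation is a first-order (Jacobian) transform of the signal covariance; a toy sketch with two parameters and three signals (the Jacobian and covariances are invented, not V3FIT quantities):

```python
import numpy as np

J = np.array([[0.8, 0.1],      # d(signal_i)/d(param_j), assumed values
              [0.2, 0.9],
              [0.5, 0.5]])
Sigma_s = np.diag([0.04, 0.09, 0.01])   # assumed signal covariance

# Gauss-Newton parameter covariance: (J^T Sigma_s^-1 J)^-1
Sigma_p = np.linalg.inv(J.T @ np.linalg.inv(Sigma_s) @ J)
param_sigmas = np.sqrt(np.diag(Sigma_p))
```

Propagating further to a derived model quantity g(p) applies the same pattern once more with the gradient of g in place of J.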
The risk of bias in systematic reviews tool showed fair reliability and good construct validity.
Bühn, Stefanie; Mathes, Tim; Prengel, Peggy; Wegewitz, Uta; Ostermann, Thomas; Robens, Sibylle; Pieper, Dawid
2017-11-01
There is a movement from generic quality checklists toward a more domain-based approach in critical appraisal tools. This study aimed to report on a first experience with the newly developed risk of bias in systematic reviews (ROBIS) tool and compare it with A Measurement Tool to Assess Systematic Reviews (AMSTAR), the most commonly used tool to assess the methodological quality of systematic reviews, while assessing validity, reliability, and applicability. A validation study with four reviewers was conducted, based on 16 systematic reviews in the field of occupational health. Interrater reliability (IRR) of all four raters was highest for domain 2 (Fleiss' kappa κ = 0.56) and lowest for domain 4 (κ = 0.04). For ROBIS, median IRR was κ = 0.52 (range 0.13-0.88) for the experienced pair of raters compared to κ = 0.32 (range 0.12-0.76) for the less experienced pair of raters. The percentage of "yes" scores of each review of ROBIS ratings was strongly correlated with the AMSTAR ratings (rs = 0.76; P = 0.01). ROBIS has fair reliability and good construct validity to assess the risk of bias in systematic reviews. More validation studies are needed to investigate reliability and applicability in particular. Copyright © 2017 Elsevier Inc. All rights reserved.
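Fleiss' kappa values like those quoted above come from a standard computation over a subjects-by-categories count table; a sketch with an invented table (16 reviews, 4 raters, three risk-of-bias categories):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) table of rating counts;
    each subject must be rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
    P_i = ((counts**2).sum(axis=1) - n) / (n * (n - 1))
    P_bar, Pe_bar = P_i.mean(), (p_j**2).sum()
    return (P_bar - Pe_bar) / (1.0 - Pe_bar)

counts = np.array([[4, 0, 0], [3, 1, 0], [0, 4, 0], [0, 0, 4],
                   [2, 2, 0], [4, 0, 0], [1, 3, 0], [0, 2, 2],
                   [4, 0, 0], [0, 4, 0], [0, 1, 3], [4, 0, 0],
                   [3, 0, 1], [0, 4, 0], [2, 1, 1], [0, 0, 4]])
kappa = fleiss_kappa(counts)   # moderate agreement for this invented table
```

Unlike Cohen's kappa, this statistic handles more than two raters, which is why it appears in the four-rater design described above.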
Systematic heterogenization for better reproducibility in animal experimentation.
Richter, S Helene
2017-08-31
The scientific literature is full of articles discussing poor reproducibility of findings from animal experiments as well as failures to translate results from preclinical animal studies to clinical trials in humans. Critics even go so far as to talk about a "reproducibility crisis" in the life sciences, a buzzword that increasingly finds its way into numerous high-impact journals. Viewed from a cynical perspective, Fett's law of the lab "Never replicate a successful experiment" has thus taken on a completely new meaning. So far, poor reproducibility and translational failures in animal experimentation have mostly been attributed to biased animal data, methodological pitfalls, current publication ethics and animal welfare constraints. More recently, the concept of standardization has also been identified as a potential source of these problems. By reducing within-experiment variation, rigorous standardization regimes limit the inference to the specific experimental conditions. In this way, however, individual phenotypic plasticity is largely neglected, resulting in statistically significant but possibly irrelevant findings that are not reproducible under slightly different conditions. By contrast, systematic heterogenization has been proposed as a concept to improve representativeness of study populations, contributing to improved external validity and hence improved reproducibility. While some first heterogenization studies are indeed very promising, it is still not clear how this approach can be transferred into practice in a logistically feasible and effective way. Thus, further research is needed to explore different heterogenization strategies as well as alternative routes toward better reproducibility in animal experimentation.
A systematic review of validated sinus surgery simulators.
Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H
2018-06-01
Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators have been described in the literature. Validation of these simulators allows for their effective utilisation in training. To conduct a systematic review of the published literature to analyse the evidence for validated ESS simulation. PubMed, Embase, Cochrane and CINAHL were searched from inception of the databases to 11 January 2017. Twelve thousand five hundred and sixteen articles were retrieved, of which 10,112 were screened following the removal of duplicates. Thirty-eight full-text articles that met the search criteria were reviewed. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting meta-analytic comparison between simulators. © 2017 John Wiley & Sons Ltd.
Schlosser, Ralf W; Belfiore, Phillip J; Sigafoos, Jeff; Briesch, Amy M; Wendt, Oliver
2018-05-28
Evidence-based practice as a process requires the appraisal of research as a critical step. In the field of developmental disabilities, single-case experimental designs (SCEDs) figure prominently as a means for evaluating the effectiveness of non-reversible instructional interventions. Comparative SCEDs contrast two or more instructional interventions to document their relative effectiveness and efficiency. As such, these designs have great potential to inform evidence-based decision-making. To harness this potential, however, interventionists and authors of systematic reviews need tools to appraise the evidence generated by these designs. Our literature review revealed that existing tools do not adequately address the specific methodological considerations of comparative SCEDs that aim to compare instructional interventions of non-reversible target behaviors. The purpose of this paper is to introduce the Comparative Single-Case Experimental Design Rating System (CSCEDARS, "cedars") as a tool for appraising the internal validity of comparative SCEDs of two or more non-reversible instructional interventions. Pertinent literature will be reviewed to establish the need for this tool and to underpin the rationales for individual rating items. Initial reliability information will be provided as well. Finally, directions for instrument validation will be proposed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Le Roux, E; Mellerio, H; Guilmin-Crépon, S; Gottot, S; Jacquin, P; Boulkedid, R; Alberti, C
2017-01-27
To explore the methodologies employed in studies assessing transition of care interventions, with the aim of defining goals for the improvement of future studies. Systematic review of comparative studies assessing transition to adult care interventions for young people with chronic conditions. MEDLINE, EMBASE, ClinicalTrials.gov. 2 reviewers screened comparative studies with experimental and quasi-experimental designs, published or registered before July 2015. Eligible studies evaluated transition interventions at least in part after transfer to adult care of young people with chronic conditions, with at least one outcome assessed quantitatively. 39 studies were reviewed; 26/39 (67%) had published their final results and 13/39 (33%) were in progress. In 9 studies (9/39, 23%), comparisons were made between preintervention and postintervention in a single group. Randomised control groups were used in 9/39 (23%) studies. 2 (2/39, 5%) reported blinding strategies. Use of validated questionnaires was reported in 28% (11/39) of studies. In terms of reporting, 15/26 (58%) of published studies did not report age at transfer, and 6/26 (23%) did not report the time of collection of each outcome. Few evaluative studies exist, and their methodological quality is variable. The complexity of interventions, the multiplicity of outcomes, the difficulty of blinding and the small patient groups make it difficult to draw conclusions about the effectiveness of interventions. Evaluating transition interventions requires an appropriate and shared methodology that will provide a better level of evidence. We identified areas for improvement in terms of randomisation, recruitment and external validity, blinding, measurement validity, standardised assessment and reporting. These improvements will increase our capacity to determine effective interventions for transition care. Published by the BMJ Publishing Group Limited.
THz-waves channeling in a monolithic saddle-coil for Dynamic Nuclear Polarization enhanced NMR
NASA Astrophysics Data System (ADS)
Macor, A.; de Rijk, E.; Annino, G.; Alberti, S.; Ansermet, J.-Ph.
2011-10-01
A saddle coil manufactured by electric discharge machining (EDM) from a solid piece of copper has recently been realized at EPFL for Dynamic Nuclear Polarization enhanced Nuclear Magnetic Resonance (DNP-NMR) experiments at 9.4 T. The electromagnetic behavior of radio-frequency (400 MHz) and THz (263 GHz) waves in the coil was studied by numerical simulation in various measurement configurations. Moreover, we present an experimental method by which the results of the THz-wave numerical modeling are validated. On the basis of the good agreement between numerical and experimental results, we conducted a systematic numerical analysis of the influence of the coil geometry and the sample properties on the THz-wave field, which is crucial for the optimization of DNP-NMR in solids.
ERIC Educational Resources Information Center
Hatala, Rose; Cook, David A.; Brydges, Ryan; Hawkins, Richard
2015-01-01
In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected…
ERIC Educational Resources Information Center
Mihura, Joni L.; Meyer, Gregory J.; Dumitrascu, Nicolae; Bombel, George
2013-01-01
We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = 0.27 (k = 770) as compared to r = 0.08 (k = 386)…
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress in developing simulation tools to predict the behavior of nuclear systems with greater accuracy and in increasing our capability to predict the behavior of these systems outside the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and the multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user).
To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Center will be a resource for industry, DOE programs, and academia in their validation efforts.
Knopp, K L; Stenfors, C; Baastrup, C; Bannon, A W; Calvo, M; Caspani, O; Currie, G; Finnerup, N B; Huang, W; Kennedy, J D; Lefevre, I; Machin, I; Macleod, M; Rees, H; Rice, A S C; Rutten, K; Segerdahl, M; Serra, J; Wodarski, R; Berge, O-G; Treede, R-D
2017-12-29
Background and aims Pain is a subjective experience, and as such, pre-clinical models of human pain are highly simplified representations of clinical features. These models are nevertheless critical for the delivery of novel analgesics for human pain, providing pharmacodynamic measurements of activity and, where possible, on-target confirmation of that activity. It has, however, been suggested that at least 50% of all pre-clinical data, independent of discipline, cannot be replicated. Additionally, the paucity of "negative" data in the public domain indicates a publication bias, and significantly impacts the interpretation of failed attempts to replicate published findings. Evidence suggests that systematic biases in experimental design and conduct and insufficiencies in reporting play significant roles in poor reproducibility across pre-clinical studies. It then follows that recommendations on how to improve these factors are warranted. Methods Members of Europain, a pain research consortium funded by the European Innovative Medicines Initiative (IMI), developed internal recommendations on how to improve the reliability of pre-clinical studies between laboratories. This guidance is focused on two aspects: experimental design and conduct, and study reporting. Results Minimum requirements for experimental design and conduct were agreed upon across the dimensions of animal characteristics, sample size calculations, inclusion and exclusion criteria, random allocation to groups, allocation concealment, and blinded assessment of outcome. Building upon the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, reporting standards were developed for pre-clinical studies of pain. These include specific recommendations for reporting on ethical issues, experimental design and conduct, and data analysis and interpretation.
Key principles such as sample size calculation, a priori definition of a primary efficacy measure, randomization, allocation concealment, and blinding are discussed. In addition, how stress and normal rodent physiology affect the outcome of analgesic drug studies is considered. Flow diagrams are standard requirements in all clinical trials, and flow diagrams for preclinical trials, describing the number of animals included/excluded and the reasons for exclusion, are proposed. The creation of a trial registry for pre-clinical studies focused on drug development, in order to estimate possible publication bias, is discussed. Conclusions More systematic research is needed to analyze how inadequate internal validity and/or experimental bias may impact reproducibility across pre-clinical pain studies. Addressing the potential threats to internal validity and the sources of experimental biases, as well as increasing transparency in reporting, are likely to improve preclinical research broadly by ensuring relevant progress is made in advancing the knowledge of chronic pain pathophysiology and identifying novel analgesics. Implications We are now disseminating these Europain processes for discussion in the wider pain research community. Any benefit from these guidelines will be dependent on acceptance and disciplined implementation across pre-clinical laboratories, funding agencies and journal editors, but it is anticipated that these guidelines will be a first step towards improving scientific rigor across the field of pre-clinical pain research.
Valstad, Mathias; Alvares, Gail A; Egknud, Maiken; Matziorinis, Anna Maria; Andreassen, Ole A; Westlye, Lars T; Quintana, Daniel S
2017-07-01
There is growing interest in the role of the oxytocin system in social cognition and behavior. Peripheral oxytocin concentrations are regularly used to approximate central concentrations in psychiatric research; however, the validity of this approach is unclear. Here we conducted a pre-registered systematic search and meta-analysis of correlations between central and peripheral oxytocin concentrations. A search of databases yielded 17 eligible studies, resulting in a total sample size of 516 participants and subjects. Overall, a positive association between central and peripheral oxytocin concentrations was revealed [r=0.29, 95% CI (0.14, 0.42), p<0.0001]. This association was moderated by experimental context [Q_b(4), p=0.003]. While no association was observed under basal conditions (r=0.08, p=0.31), significant associations were observed after intranasal oxytocin administration (r=0.66, p<0.0001), and after experimentally induced stress (r=0.49, p=0.001). These results indicate a coordination of central and peripheral oxytocin release after stress and after intranasal administration. Although popular, the approach of using peripheral oxytocin levels to approximate central levels under basal conditions is not supported by the present results. Copyright © 2017 Elsevier Ltd. All rights reserved.
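Meta-analytic pooling of correlation coefficients like the r=0.29 above is conventionally done on Fisher z-transformed values with inverse-variance weights. A minimal fixed-effect sketch (the per-study (r, n) pairs below are invented; the study itself additionally ran moderator analyses not shown here):

```python
import math

def pool_correlations(studies, z_crit=1.96):
    # studies: list of (r, n) pairs; Fisher z-transform each correlation,
    # weight by inverse variance (var(z) = 1/(n-3)), back-transform
    pairs = [(math.atanh(r), n - 3) for r, n in studies]
    w_sum = sum(w for _, w in pairs)
    z_bar = sum(z * w for z, w in pairs) / w_sum
    se = math.sqrt(1 / w_sum)
    ci = (math.tanh(z_bar - z_crit * se), math.tanh(z_bar + z_crit * se))
    return math.tanh(z_bar), ci

r_pooled, ci = pool_correlations([(0.3, 50), (0.2, 100), (0.4, 30)])
```

A random-effects model, more appropriate when contexts differ (basal vs. post-administration vs. post-stress), would add a between-study variance term to each weight.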
HZETRN radiation transport validation using balloon-based experimental data
NASA Astrophysics Data System (ADS)
Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.
2018-05-01
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. 
Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
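The interval-based validation metric is not specified in the abstract; one plausible form, assumed here for illustration, scores a model prediction as zero error when it falls inside the measurement uncertainty interval and otherwise as the signed relative distance to the nearest interval edge (negative values indicating under-prediction, matching the usage above):

```python
def interval_relative_diff(model, meas, meas_unc):
    # signed relative distance from a model value to the measurement
    # uncertainty interval [meas - meas_unc, meas + meas_unc];
    # zero if the model prediction falls inside the interval
    lo, hi = meas - meas_unc, meas + meas_unc
    if lo <= model <= hi:
        return 0.0
    edge = lo if model < lo else hi
    return (model - edge) / meas
```

For example, a predicted flux of 8 against a measurement of 10 ± 1 scores -0.1, a 10% under-prediction beyond the error bar.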
Systematic reviews of animal studies; missing link in translational research?
van Luijk, Judith; Bakker, Brenda; Rovers, Maroeska M; Ritskes-Hoitinga, Merel; de Vries, Rob B M; Leenaars, Marlies
2014-01-01
The methodological quality of animal studies is an important factor hampering the translation of results from animal studies to a clinical setting. Systematic reviews of animal studies may provide a suitable method to assess and thereby improve their methodological quality. The aims of this study were: 1) to evaluate the risk of bias assessment in animal-based systematic reviews, and 2) to study the internal validity of the primary animal studies included in these systematic reviews. We systematically searched PubMed and Embase for systematic reviews (SRs) of preclinical animal studies published between 2005 and 2012. A total of 91 systematic reviews met our inclusion criteria. The risk of bias was assessed in 48 (52.7%) of these 91 systematic reviews. Thirty-three (36.3%) SRs provided sufficient information to evaluate the internal validity of the included studies. Of the evaluated primary studies, 24.6% were randomized, 14.6% reported blinding of the investigator/caretaker, 23.9% blinded the outcome assessment, and 23.1% reported drop-outs. To improve the translation of animal data to clinical practice, systematic reviews of animal studies are worthwhile, but the internal validity of primary animal studies needs to be improved. Furthermore, risk of bias should be assessed by systematic reviews of animal studies to provide insight into the reliability of the available evidence.
Ion-induced temperature rise in various types of insulators
NASA Astrophysics Data System (ADS)
Szenes, G.
2015-03-01
Swift heavy ions induce a Gaussian temperature distribution Θ(r) in insulators which depends neither on the physical properties of the solid nor on the kind of projectile. In this paper, we show that all experimental data suitable for analysis confirm the validity of Θ(r). The same result is obtained for ZrSiO4, MgAl2O4, KTiOPO4, Al2O3 and Y2O3, for which systematic experiments have not yet been performed. The analysis demonstrates that Θ(r) may be valid in biomolecular targets and in high-Tc superconductors as well. The Fourier equation cannot reproduce the relation Θ(r); thus, it is not suitable for estimating ion-induced temperatures. The consequences of the uniformity of track formation must also affect other radiation-induced effects.
ERIC Educational Resources Information Center
Benner, Gregory J.; Uhing, Brad M.; Pierce, Corey D.; Beaudoin, Kathleen M.; Ralston, Nicole C.; Mooney, Paul
2009-01-01
We sought to extend instrument validation research for the Systematic Screening for Behavior Disorders (SSBD) (Walker & Severson, 1990) using convergent validation techniques. Associations between Critical Events, Adaptive Behavior, and Maladaptive Behavior indices of the SSBD were examined in relation to syndrome, broadband, and total scores…
49 CFR Appendix B to Part 222 - Alternative Safety Measures
Code of Federal Regulations, 2014 CFR
2014-10-01
... statistically valid baseline violation rate must be established through automated or systematic manual... enforcement, a program of public education and awareness directed at motor vehicle drivers, pedestrians and..., a statistically valid baseline violation rate must be established through automated or systematic...
49 CFR Appendix B to Part 222 - Alternative Safety Measures
Code of Federal Regulations, 2013 CFR
2013-10-01
... statistically valid baseline violation rate must be established through automated or systematic manual... enforcement, a program of public education and awareness directed at motor vehicle drivers, pedestrians and..., a statistically valid baseline violation rate must be established through automated or systematic...
Systematic Experimental Designs For Mixed-species Plantings
Jeffery C. Goelz
2001-01-01
Systematic experimental designs provide splendid demonstration areas for scientists and land managers to observe the effects of a gradient of species composition. Systematic designs are based on large plots where species composition varies gradually. Systematic designs save considerable space and require many fewer seedlings than conventional mixture designs. One basic...
Viscosity and diffusivity in melts: from unary to multicomponent systems
NASA Astrophysics Data System (ADS)
Chen, Weimin; Zhang, Lijun; Du, Yong; Huang, Baiyun
2014-05-01
Viscosity and diffusivity, two important transport coefficients, are systematically investigated from unary to binary to multicomponent melts in the present work. By coupling with Kaptay's viscosity equation for pure liquid metals and effective radii of the diffusing species, the Sutherland equation is modified to take the size effect into account, and further cast into an Arrhenius form for convenient use. Its reliability for predicting self-diffusivity and impurity diffusivity in unary liquids is then validated by comparing the calculated self-diffusivities and impurity diffusivities in liquid Al- and Fe-based alloys with experimental and assessed data. Moreover, the Kozlov model was chosen among various viscosity models as the most reliable one for reproducing the experimental viscosities in binary and multicomponent melts. Based on the reliable viscosities calculated from the Kozlov model, the modified Sutherland equation is used to predict the tracer diffusivities in binary and multicomponent melts, and validated in Al-Cu, Al-Ni and Al-Ce-Ni melts. Comprehensive comparisons between the calculated results and the literature data indicate that the present calculations reproduce the experimental and theoretical tracer diffusivities well. In addition, the vacancy-wind factor in binary liquid Al-Ni alloys with increasing temperature is also discussed. Furthermore, the calculated inter-diffusivities in liquid Al-Cu, Al-Ni and Al-Ag-Cu alloys are in excellent agreement with the measured and theoretical data. Comparisons between the simulated concentration profiles and the measured ones in Al-Cu, Al-Ce-Ni and Al-Ag-Cu melts are further used to validate the present calculation method.
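The paper's modified equation is not reproduced in the abstract, but the classic (unmodified) Sutherland relation it starts from links diffusivity to viscosity via D = k_B·T / (4π·η·r), assuming slip boundary conditions around a diffusing particle of radius r. A sketch, with invented values roughly representative of liquid Al near its melting point:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def sutherland_diffusivity(T, eta, r):
    # classic Sutherland relation D = kB*T / (4*pi*eta*r) with slip
    # boundary conditions; the paper's size-effect modification via
    # effective radii is not reproduced here
    return KB * T / (4 * math.pi * eta * r)

# hypothetical inputs: T in K, viscosity in Pa*s, atomic radius in m
D = sutherland_diffusivity(T=950.0, eta=1.3e-3, r=1.4e-10)  # m^2/s
```

With these inputs the result lands in the 1e-9 to 1e-8 m²/s range typical of self-diffusion in liquid metals; the stick-boundary Stokes-Einstein variant replaces 4π with 6π.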
Validation of reference genes for quantitative gene expression analysis in experimental epilepsy.
Sadangi, Chinmaya; Rosenow, Felix; Norwood, Braxton A
2017-12-01
To grasp the molecular mechanisms and pathophysiology underlying epilepsy development (epileptogenesis) and epilepsy itself, it is important to understand the gene expression changes that occur during these phases. Quantitative real-time polymerase chain reaction (qPCR) is a technique that rapidly and accurately determines gene expression changes. It is crucial, however, that stable reference genes are selected for each experimental condition to ensure that accurate values are obtained for genes of interest. If reference genes are unstably expressed, this can lead to inaccurate data and erroneous conclusions. To date, epilepsy studies have used mostly single, nonvalidated reference genes. This is the first study to systematically evaluate reference genes in male Sprague-Dawley rat models of epilepsy. We assessed 15 potential reference genes in hippocampal tissue obtained from 2 different models during epileptogenesis, 1 model during chronic epilepsy, and a model of noninjurious seizures. Reference gene ranking varied between models and also differed between epileptogenesis and chronic epilepsy time points. There was also some variation among the four mathematical models used to rank reference genes. Notably, we found novel reference genes to be more stably expressed than those most often used in experimental epilepsy studies. The consequence of these findings is that reference genes suitable for one epilepsy model may not be appropriate for others and that reference genes can change over time. It is, therefore, critically important to validate potential reference genes before using them as normalizing factors in expression analysis in order to ensure accurate, valid results. © 2017 Wiley Periodicals, Inc.
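One of the standard mathematical models for ranking reference genes is the comparative ΔCt method: a gene's stability score is the mean standard deviation of its pairwise Ct differences against every other candidate, with lower scores meaning more stable expression. A sketch (the gene names and Ct values below are invented, not from the study):

```python
from statistics import pstdev

def delta_ct_ranking(ct):
    # ct: gene -> list of Ct values over the same samples; a gene's score
    # is the mean std dev of pairwise delta-Ct against every other gene
    # (lower = more stably expressed)
    scores = {}
    for g in ct:
        sds = [pstdev([a - b for a, b in zip(ct[g], ct[h])])
               for h in ct if h != g]
        scores[g] = sum(sds) / len(sds)
    return sorted(scores.items(), key=lambda kv: kv[1])  # most stable first

# invented Ct values for three candidate reference genes across 3 samples
ranking = delta_ct_ranking({'geneA': [20.0, 20.1, 19.9],
                            'geneB': [22.0, 22.1, 21.9],
                            'geneC': [18.0, 25.0, 15.0]})
```

Here the erratically expressed geneC ranks last. Other rankers mentioned in this literature (geNorm, NormFinder, BestKeeper) use related but distinct criteria, which is why rankings can disagree across models.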
NASA Astrophysics Data System (ADS)
Starke, R.; Schober, G. A. H.
2018-03-01
We provide a systematic theoretical, experimental, and historical critique of the standard derivation of Fresnel's equations, which shows in particular that these well-established equations actually contradict the traditional, macroscopic approach to electrodynamics in media. Subsequently, we give a rederivation of Fresnel's equations that is exclusively based on the microscopic Maxwell equations and hence in accordance with modern first-principles materials physics. In particular, as a main outcome of this analysis that is of more general interest, we propose the most general boundary conditions on electric and magnetic fields that are valid at the microscopic level.
Lohse, Keith R; Pathania, Anupriya; Wegman, Rebecca; Boyd, Lara A; Lang, Catherine E
2018-03-01
To use the Centralized Open-Access Rehabilitation database for Stroke to explore reporting of both experimental and control interventions in randomized controlled trials for stroke rehabilitation (including upper and lower extremity therapies). The Centralized Open-Access Rehabilitation database for Stroke was created from a search of MEDLINE, Embase, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Cumulative Index of Nursing and Allied Health from the earliest available date to May 31, 2014. A total of 2892 titles were reduced to 514 that were screened by full text. This screening left 215 randomized controlled trials in the database (489 independent groups representing 12,847 patients). Using a mixture of qualitative and quantitative methods, we performed a text-based analysis of how the procedures of experimental and control therapies were described. Experimental and control groups were rated by 2 independent coders according to the Template for Intervention Description and Replication criteria. Linear mixed-effects regression with a random effect of study (groups nested within studies) showed that experimental groups had statistically more words in their procedures (mean, 271.8 words) than did control groups (mean, 154.8 words) (P<.001). Experimental groups had statistically more references in their procedures (mean, 1.60 references) than did control groups (mean, 0.82 references) (P<.001). Experimental groups also scored significantly higher on the total Template for Intervention Description and Replication checklist (mean score, 7.43 points) than did control groups (mean score, 5.23 points) (P<.001). Control treatments in stroke motor rehabilitation trials are underdescribed relative to experimental treatments. These poor descriptions are especially problematic for "conventional" therapy control groups. Poor reporting is a threat to the internal validity and generalizability of clinical trial results.
We recommend authors use preregistered protocols and established reporting criteria to improve transparency. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Experimental validation of depletion calculations with VESTA 2.1.5 using JEFF-3.2
NASA Astrophysics Data System (ADS)
Haeck, Wim; Ichou, Raphaëlle
2017-09-01
The removal of decay heat is a significant safety concern in nuclear engineering, both for the operation of a nuclear reactor in normal and accidental conditions and for intermediate and long term waste storage facilities. The correct evaluation of the decay heat produced by an irradiated material requires first of all the calculation of the composition of the irradiated material by depletion codes such as VESTA 2.1, currently under development at IRSN in France. A set of PWR assembly decay heat measurements performed by the Swedish Central Interim Storage Facility (CLAB) located in Oskarshamn (Sweden) has been calculated using different nuclear data libraries: ENDF/B-VII.0, JEFF-3.1, JEFF-3.2 and JEFF-3.3T1. Using these nuclear data libraries, VESTA 2.1 calculates the assembly decay heat for almost all cases within 4% of the measured decay heat. On average, the ENDF/B-VII.0 calculated decay heat values show a systematic underestimation of only 0.5%. With the JEFF-3.1 library, the systematic underestimation is about 2%. Switching to the JEFF-3.2 library improves this slightly (to about 1.5%). The changes made in the JEFF-3.3T1 beta library appear to overcorrect, as the systematic underestimation is transformed into a systematic overestimation of about 1.5%.
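A systematic under- or overestimation like those quoted is conventionally summarized as the mean relative deviation of calculated (C) from experimental (E) decay heat. A sketch (the assembly decay heat values below are invented to reproduce a 0.5% bias, not taken from the CLAB dataset):

```python
def mean_relative_bias(calculated, measured):
    # mean of C/E - 1 over all assemblies; negative values indicate a
    # systematic underestimation of the measured decay heat
    devs = [c / e - 1.0 for c, e in zip(calculated, measured)]
    return sum(devs) / len(devs)

# invented decay heat values (W): calculated vs measured, 3 assemblies
bias = mean_relative_bias([398.0, 597.0, 796.0],
                          [400.0, 600.0, 800.0])  # ≈ -0.005, i.e. -0.5%
```

Separating this mean bias from the per-assembly spread (here all cases within 4%) distinguishes a library-level systematic offset from random scatter.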
Shea, Beverley J; Grimshaw, Jeremy M; Wells, George A; Boers, Maarten; Andersson, Neil; Hamel, Candyce; Porter, Ashley C; Tugwell, Peter; Moher, David; Bouter, Lex M
2007-02-15
Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
Validation of model predictions of pore-scale fluid distributions during two-phase flow
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.
2018-05-01
Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.
Evaluating information skills training in health libraries: a systematic review.
Brettle, Alison
2007-12-01
Systematic reviews have shown that there is limited evidence to demonstrate that the information literacy training health librarians provide is effective in improving clinicians' information skills or has an impact on patient care. Studies lack measures which demonstrate validity and reliability in evaluating the impact of training. To determine what measures have been used; the extent to which they are valid and reliable; to provide guidance for health librarians who wish to evaluate the impact of their information skills training. Systematic review methodology involved searching seven databases and personal files. Studies were included if they were about information skills training, used an objective measure to assess outcomes, and occurred in a health setting. Fifty-four studies were included in the review. Most outcome measures used in the studies were not tested for the key criteria of validity and reliability. Three measures that were tested for validity and reliability are described in more detail. Selecting an appropriate measure to evaluate the impact of training is a key factor in carrying out any evaluation. This systematic review provides guidance to health librarians by highlighting measures used in various circumstances, and those that demonstrate validity and reliability.
Interaction of Theory and Practice to Assess External Validity.
Leviton, Laura C; Trujillo, Mathew D
2016-01-18
Variations in local context bedevil the assessment of external validity: the ability to generalize about effects of treatments. For evaluation, the challenges of assessing external validity are intimately tied to the translation and spread of evidence-based interventions. This makes external validity a question for decision makers, who need to determine whether to endorse, fund, or adopt interventions that were found to be effective and how to ensure high quality once they spread. To present the rationale for using theory to assess external validity and the value of more systematic interaction of theory and practice. We review advances in external validity, program theory, practitioner expertise, and local adaptation. Examples are provided for program theory, its adaptation to diverse contexts, and generalizing to contexts that have not yet been studied. The often critical role of practitioner experience is illustrated in these examples. Work is described that the Robert Wood Johnson Foundation is supporting to study treatment variation and context more systematically. Researchers and developers generally see a limited range of contexts in which the intervention is implemented. Individual practitioners see a different and often a wider range of contexts, albeit not a systematic sample. Organized and taken together, however, practitioner experiences can inform external validity by challenging the developers and researchers to consider a wider range of contexts. Researchers have developed a variety of ways to adapt interventions in light of such challenges. In systematic programs of inquiry, as opposed to individual studies, the problems of context can be better addressed. Evaluators have advocated an interaction of theory and practice for many years, but the process can be made more systematic and useful. 
Systematic interaction can set priorities for assessment of external validity by examining the prevalence and importance of context features and treatment variations. Practitioner interaction with researchers and developers can assist in sharpening program theory, reducing uncertainty about treatment variations that are consistent or inconsistent with the theory, inductively ruling out the ones that are harmful or irrelevant, and helping set priorities for more rigorous study of context and treatment variation. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Chen, Zhangqi; Liu, Zi-Kui; Zhao, Ji-Cheng
2018-05-01
Diffusion coefficients of seven binary systems (Ti-Mo, Ti-Nb, Ti-Ta, Ti-Zr, Zr-Mo, Zr-Nb, and Zr-Ta) at 1200 °C, 1000 °C, and 800 °C were experimentally determined using three Ti-Mo-Nb-Ta-Zr diffusion multiples. Electron probe microanalysis (EPMA) was performed to collect concentration profiles at the binary diffusion regions. Forward simulation analysis (FSA) was then applied to extract both impurity and interdiffusion coefficients in the Ti-rich and Zr-rich parts of the bcc phase. Excellent agreement between our results and most of the literature data validates the high-throughput approach combining FSA with diffusion multiples to obtain a large amount of systematic diffusion data, which will help establish the diffusion (mobility) databases for the design and development of biomedical and structural Ti alloys.
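The forward simulation analysis itself is beyond a short sketch, but once diffusion coefficients have been extracted at several temperatures, their temperature dependence is conventionally summarized by an Arrhenius fit, D = D0·exp(−Q/RT). A sketch on synthetic values (the D0 and Q below are invented, not the paper's results):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(T_kelvin, D):
    """Fit ln D = ln D0 - Q/(R*T); return (D0 in m^2/s, Q in J/mol)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(D), 1)
    return np.exp(intercept), -slope * R

# Synthetic diffusivities generated from D0 = 1e-4 m^2/s, Q = 250 kJ/mol
# at the three temperatures used in the study (1200, 1000, 800 C).
T = np.array([1473.15, 1273.15, 1073.15])
D = 1e-4 * np.exp(-250e3 / (R * T))
D0_fit, Q_fit = arrhenius_fit(T, D)
print(D0_fit, Q_fit)  # recovers ~1e-4 m^2/s and ~250e3 J/mol
```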
THz-waves channeling in a monolithic saddle-coil for Dynamic Nuclear Polarization enhanced NMR.
Macor, A; de Rijk, E; Annino, G; Alberti, S; Ansermet, J-Ph
2011-10-01
A saddle coil manufactured by electric discharge machining (EDM) from a solid piece of copper has recently been realized at EPFL for Dynamic Nuclear Polarization enhanced Nuclear Magnetic Resonance experiments (DNP-NMR) at 9.4 T. The corresponding electromagnetic behavior of radio-frequency (400 MHz) and THz (263 GHz) waves was studied by numerical simulation in various measurement configurations. Moreover, we present an experimental method by which the results of the THz-wave numerical modeling are validated. On the basis of the good agreement between numerical and experimental results, we conducted by numerical simulation a systematic analysis of the influence of the coil geometry and of the sample properties on the THz-wave field, which is crucial in view of the optimization of DNP-NMR in solids. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.
2012-01-01
Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…
[Finite Element Modelling of the Eye for the Investigation of Accommodation].
Martin, H; Stachs, O; Guthoff, R; Grabow, N
2016-12-01
Background: Accommodation research increasingly uses engineering methods. This article presents the use of the finite element method in accommodation research. Material and Methods: Geometry, material data and boundary conditions are prerequisites for the application of the finite element method. Published data on geometry and materials are reviewed. The importance of boundary conditions and their influence on the results are demonstrated. Results: Two-dimensional and three-dimensional models of the anterior chamber of the eye are presented. With simple two-dimensional models, it is shown that realistic results for the accommodation amplitude can already be achieved. More complex three-dimensional models of the accommodation mechanism - including the ciliary muscle - require further investigations of the material data and of the morphology of the ciliary muscle, if they are to achieve realistic results for accommodation. Discussion and Conclusion: The efficiency and the limitations of the finite element method are especially clear for accommodation. Application of the method requires extensive preparation, including acquisition of geometric and material data and experimental validation. However, a validated model can be used as a basis for parametric studies, by systematically varying material data and geometric dimensions. This allows systematic investigation of how essential input parameters influence the results. Georg Thieme Verlag KG Stuttgart · New York.
Breslin, Gavin; Shannon, Stephen; Haughey, Tandy; Donnelly, Paul; Leavey, Gerard
2017-08-31
The aim of the current study was to conduct a systematic review determining the effect of sport-specific mental health awareness programs to improve mental health knowledge and help-seeking among sports coaches, athletes and officials. The second aim was to review the study quality and to report on the validity of measures that were used to determine the effectiveness of programs. Sport-specific mental health awareness programs adopting an experimental or quasi-experimental design were included for synthesis. Six electronic databases were searched: PsycINFO, MEDLINE (OVID interface), Scopus, Cochrane, CINAHL and SPORTDiscus. Each database was searched from its year of inception to October 2016. Risk of bias was assessed using the Cochrane and QATSQ tools. Ten studies were included from the 1216 studies retrieved: four comprising coaches or service providers, one with officials, four with athletes, and one involving a combination of coaches and athletes. A range of outcomes was used to assess indices of mental health awareness and well-being. Mental health referral efficacy was improved in six studies, while three reported an increase in knowledge about mental health disorders. However, seven studies did not report effect sizes for their outcomes, limiting clinically meaningful interpretations. Furthermore, there was substantial heterogeneity and limited validity in the outcome measures of mental health knowledge and referral efficacy. Seven studies demonstrated a high risk of bias. Further well-designed controlled intervention studies are required. Researchers, practitioners and policy makers should adhere to available methodological guidance and apply the psychological theory of behaviour change when developing and evaluating complex interventions. PROSPERO CRD42016040178.
Furedy, John J
2003-11-01
The differential/experimental distinction that Cronbach specified is important because any adequate account of psychological phenomena requires the recognition of the validity of both approaches, and a meaningful melding of the two. This paper suggests that Pavlov's work in psychology, based on earlier traditions of inquiry that can be traced back to the pre-Socratics, provides a potential way of achieving this melding, although such features as systematic rather than anecdotal methods of observation need to be added. Pavlov's methodological behaviorist approach is contrasted with metaphysical behaviorism (as exemplified explicitly in Watson and Skinner, and implicitly in the computer-metaphorical, information-processing explanations employed by current "cognitive" psychology). A common feature of the metaphysical approach is that individual-differences variables like sex are essentially ignored, or relegated to ideological categories such as the treatment of sex as merely a "social construction." Examples of research both before and after the "cognitive revolution" are presented where experimental and differential methods are melded, and individual differences are treated as phenomena worthy of investigation rather than as nuisance factors that merely add to experimental error.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Morphology of viscoplastic drop impact on viscoplastic surfaces.
Chen, Simeng; Bertola, Volfango
2017-01-25
The impact of viscoplastic drops onto viscoplastic substrates characterized by different magnitudes of the yield stress is investigated experimentally. The interaction between viscoplastic drops and surfaces has an important application in additive manufacturing, where a fresh layer of material is deposited on a partially cured or dried layer of the same material. So far, no systematic studies on this subject have been reported in the literature. The impact morphology of different drop/substrate combinations, with yield stresses ranging from 1.13 Pa to 11.7 Pa, was studied by high speed imaging for impact Weber numbers between 15 and 85. Experimental data were compared with one of the existing models for Newtonian drop impact onto liquid surfaces. Results show that the magnitude of the yield stress of the drop/substrate combination strongly affects the final shape of the impacting drop, which is permanently deformed at the end of impact. The comparison between experimental data and model predictions suggests the crater evolution model is only valid when predicting the evolution of the crater at sufficiently high Weber numbers.
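The impact Weber number quoted above (15 to 85) compares inertial to capillary forces, We = ρv²D/σ. A sketch with hypothetical, water-like drop properties (the study's actual fluid properties are not given here):

```python
def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v^2 * D / sigma: ratio of inertial to capillary forces at impact."""
    return density * velocity**2 * diameter / surface_tension

# Hypothetical water-like drop: rho = 1000 kg/m^3, v = 1 m/s,
# D = 2.4 mm, sigma = 0.06 N/m.
we = weber_number(1000.0, 1.0, 2.4e-3, 0.06)
print(we)  # 40.0, inside the 15-85 range studied
```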
Helium release during shale deformation: Experimental validation
Bauer, Stephen J.; Gardner, W. Payton; Heath, Jason E.
2016-07-01
This paper describes initial experimental results of helium tracer release monitoring during deformation of shale. Naturally occurring radiogenic 4He is present in high concentration in most shales. During rock deformation, accumulated helium could be released as fractures are created and new transport pathways are created. We present the results of an experimental study in which confined reservoir shale samples, cored parallel and perpendicular to bedding, which were initially saturated with helium to simulate reservoir conditions, are subjected to triaxial compressive deformation. During the deformation experiment, differential stress, axial, and radial strains are systematically tracked. Release of helium is dynamically measured using a helium mass spectrometer leak detector. Helium released during deformation is observable at the laboratory scale and the release is tightly coupled to the shale deformation. These first measurements of dynamic helium release from rocks undergoing deformation show that helium provides information on the evolution of microstructure as a function of changes in stress and strain.
Construction and validation of forms: systematization of the care of people under hemodialysis.
Arreguy-Sena, Cristina; Marques, Tais de Oliveira; Souza, Luciene Carnevale de; Alvarenga-Martins, Nathália; Krempser, Paula; Braga, Luciene Muniz; Parreira, Pedro Miguel Dos Santos Dinis
2018-01-01
to create and validate forms to support the systematization of nursing care for people on hemodialysis. An institutional case study was conducted to support the systematization of care through the construction of forms for data collection, diagnoses, interventions and nursing outcomes, using cross-mapping, Risner's reasoning, Neuman's theory, and taxonomies of nursing diagnoses, interventions and outcomes, with application in clinical practice and validation by a focus group of specialist nurses. Eighteen people on hemodialysis and 7 nurses participated. Content consensus on the forms was reached with specialist nurses in the area (Cronbach's α = 0.86). The forms captured 43 diagnoses, 26 interventions and 78 nursing outcomes, depicting human responses in their singularity. The validated forms fill a gap by enabling the capture of human responses of people on hemodialysis and by supporting the planning of nursing care on a scientific basis.
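The internal-consistency statistic reported above (a Cronbach coefficient of 0.86) can be computed directly from an expert-by-item rating matrix. A sketch with invented ratings, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings of 4 form items by 6 experts.
ratings = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 4, 4],
    [3, 3, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))  # high internal consistency (~0.97)
```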
Systematic Review of Measures Used in Pictorial Cigarette Pack Warning Experiments.
Francis, Diane B; Hall, Marissa G; Noar, Seth M; Ribisl, Kurt M; Brewer, Noel T
2017-10-01
We sought to describe characteristics and psychometric properties of measures used in pictorial cigarette pack warning experiments and provide recommendations for future studies. Our systematic review identified 68 pictorial cigarette pack warning experiments conducted between 2000 and 2016 in 22 countries. Two independent coders coded all studies on study features, including sample characteristics, theoretical framework, and constructs assessed. We also coded measurement characteristics, including construct, number of items, source, reliability, and validity. We identified 278 measures representing 61 constructs. The most commonly assessed construct categories were warning reactions (62% of studies) and perceived effectiveness (60%). The most commonly used outcomes were affective reactions (35%), perceived likelihood of harm (22%), intention to quit smoking (22%), perceptions that warnings motivate people to quit smoking (18%), and credibility (16%). Only 4 studies assessed smoking behavior. More than half (54%) of all measures were single items. For multi-item measures, studies reported reliability data 68% of the time (mean α = 0.88, range α = 0.68-0.98). Studies reported sources of measures only 33% of the time and rarely reported validity data. Of 68 studies, 37 (54%) mentioned a theory as informing the study. Our review found great variability in constructs and measures used to evaluate the impact of cigarette pack pictorial warnings. Many measures were single items with unknown psychometric properties. Recommendations for future studies include a greater emphasis on theoretical models that inform measurement, use of reliable and validated (preferably multi-item) measures, and better reporting of measure sources. Robust and consistent measurement is important for building a strong, cumulative evidence base to support pictorial cigarette pack warning policies. 
This systematic review of experimental studies of pictorial cigarette warnings demonstrates the need for standardized, theory-based measures. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco.
NASA Technical Reports Server (NTRS)
Simon, S. B.; Papike, J. J.; Horz, F.; See, T. H.
1985-01-01
The results of an experiment designed to test the validity of the model for agglutinate formation involving fusion of the finest fraction or F3 are reported. Impact glasses were formed from various mixes of orthoclase and albite powders, which were used as analogs for soils with chemically contrasting coarse and fine fractions. The results showed that the single most important factor displacing the composition of a small-scale impact melt from the bulk composition of the source regolith is the fractionated composition of the finest soil fraction. Volatile loss and the amount of melting, which in turn are determined by the degree of shock, are also important. As predicted by the model, the lower pressure melts are the most fractionated, and higher pressure is accompanied by increased melting causing glass compositions to approach the bulk. In general, the systematics predicted by the model are observed; the model appears to be valid.
Tactical Defenses Against Systematic Variation in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2002-01-01
This paper examines the role of unexplained systematic variation on the reproducibility of wind tunnel test results. Sample means and variances estimated in the presence of systematic variations are shown to be susceptible to bias errors that are generally non-reproducible functions of those variations. Unless certain precautions are taken to defend against the effects of systematic variation, it is shown that experimental results can be difficult to duplicate and of dubious value for predicting system response with the highest precision or accuracy that could otherwise be achieved. Results are reported from an experiment designed to estimate how frequently systematic variations are in play in a representative wind tunnel experiment. These results suggest that significant systematic variation occurs frequently enough to cast doubts on the common assumption that sample observations can be reliably assumed to be independent. The consequences of ignoring correlation among observations induced by systematic variation are considered in some detail. Experimental tactics are described that defend against systematic variation. The effectiveness of these tactics is illustrated through computational experiments and real wind tunnel experimental results. Some tutorial information describes how to analyze experimental results that have been obtained using such quality assurance tactics.
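The paper's central point, that systematic variation biases sequentially acquired comparisons while a randomized run order defends against it, can be illustrated numerically. Everything below is synthetic: a linear drift stands in for slow, unexplained tunnel variation, and the two "configurations" are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 200                              # total runs, half per configuration
drift = np.linspace(0.0, 1.0, n_runs)     # slow systematic variation over the test
noise = 0.05 * rng.normal(size=n_runs)    # ordinary random error per run
true_diff = 0.10                          # configuration B truly exceeds A by 0.10

def estimated_difference(order):
    """Run configuration A where order==0 and B where order==1, in sequence,
    then estimate the B-minus-A difference from the sample means."""
    response = true_diff * order + drift + noise
    return response[order == 1].mean() - response[order == 0].mean()

sequential = np.repeat([0, 1], n_runs // 2)   # all A runs first, then all B
randomized = rng.permutation(sequential)      # same runs, randomized order

est_seq = estimated_difference(sequential)
est_rand = estimated_difference(randomized)
print(est_seq)   # badly biased: ~0.60 instead of the true 0.10
print(est_rand)  # close to the true 0.10
```

Randomization converts the drift-induced bias into additional random scatter, which honest variance estimates can then capture; blocking and replication (also discussed in the paper) reduce that scatter further.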
Huebner, David M; Perry, Nicholas S
2015-10-01
Behavioral interventions to reduce sexual risk behavior depend on strong health behavior theory. By identifying the psychosocial variables that lead causally to sexual risk, theories provide interventionists with a guide for how to change behavior. However, empirical research is critical to determining whether a particular theory adequately explains sexual risk behavior. A large body of cross-sectional evidence, which has been reviewed elsewhere, supports the notion that certain theory-based constructs (e.g., self-efficacy) are correlates of sexual behavior. However, given the limitations of inferring causality from correlational research, it is essential that we review the evidence from more methodologically rigorous studies (i.e., longitudinal and experimental designs). This systematic review identified 44 longitudinal studies in which investigators attempted to predict sexual risk from psychosocial variables over time. We also found 134 experimental studies (i.e., randomized controlled trials of HIV interventions), but of these only 9 (6.7 %) report the results of mediation analyses that might provide evidence for the validity of health behavior theories in predicting sexual behavior. Results show little convergent support across both types of studies for most traditional, theoretical predictors of sexual behavior. This suggests that the field must expand the body of empirical work that utilizes the most rigorous study designs to test our theoretical assumptions. The inconsistent results of existing research would indicate that current theoretical models of sexual risk behavior are inadequate, and may require expansion or adaptation.
HoPaCI-DB: host-Pseudomonas and Coxiella interaction database
Bleves, Sophie; Dunger, Irmtraud; Walter, Mathias C.; Frangoulidis, Dimitrios; Kastenmüller, Gabi; Voulhoux, Romé; Ruepp, Andreas
2014-01-01
Bacterial infectious diseases are the result of multifactorial processes affected by the interplay between virulence factors and host targets. The host-Pseudomonas and Coxiella interaction database (HoPaCI-DB) is a publicly available manually curated integrative database (http://mips.helmholtz-muenchen.de/HoPaCI/) of host–pathogen interaction data from Pseudomonas aeruginosa and Coxiella burnetii. The resource provides structured information on 3585 experimentally validated interactions between molecules, bioprocesses and cellular structures extracted from the scientific literature. Systematic annotation and interactive graphical representation of disease networks make HoPaCI-DB a versatile knowledge base for biologists and network biology approaches. PMID:24137008
The augmentation algorithm and molecular phylogenetic trees
NASA Technical Reports Server (NTRS)
Holmquist, R.
1978-01-01
Moore's (1977) augmentation procedure is discussed, and it is concluded that the procedure is valid for obtaining estimates of the total number of fixed nucleotide substitutions both theoretically and in practice, for both simulated and real data, and in agreement, for experimentally dense data sets, with stochastic estimates of the divergence, provided the restrictions on codon mutability resulting from natural selection are explicitly allowed for. Tateno and Nei's (1978) critique that the augmentation procedure has a systematic bias toward overestimation of the total number of nucleotide replacements is disputed, and a data analysis suggests that ancestral sequences inferred by the method of parsimony contain a large number of incorrectly assigned nucleotides.
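For contrast with parsimony-style counting, a standard stochastic estimate of divergence is the Jukes-Cantor correction, which infers the total number of substitutions per site from the observed fraction of differing sites. This is a generic illustration of such a stochastic estimate, not Moore's augmentation procedure:

```python
import math

def jukes_cantor_distance(p_observed):
    """Estimate total substitutions per site from the observed fraction of
    differing sites, correcting for multiple hits at the same position:
    K = -(3/4) * ln(1 - (4/3) * p)."""
    if not 0.0 <= p_observed < 0.75:
        raise ValueError("JC69 correction undefined for p >= 0.75")
    return -0.75 * math.log(1.0 - 4.0 * p_observed / 3.0)

# At low divergence the correction is small; it grows as hidden multiple
# substitutions accumulate.
print(jukes_cantor_distance(0.05))  # ~0.052
print(jukes_cantor_distance(0.30))  # ~0.38, well above the observed 0.30
```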
4D pressure MRI: validation through in-vitro experiments and simulations
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele; Amili, Omid; Coletti, Filippo
2017-11-01
Advances in MRI scan technology and recently developed acquisition sequences have led to the development of 4D flow MRI, a protocol capable of characterizing in-vivo hemodynamics in patients. Thus, the availability of phase-averaged time-resolved three-dimensional blood velocities has opened new opportunities for computing a wide spectrum of totally non-invasive hemodynamic indicators. In this regard, relative pressures play a particularly important role, as they are routinely employed in the clinic to detect cardiovascular abnormalities (e.g., in peripheral artery disease, valve stenosis, hypertension, etc.). In the first part of the talk, we discuss how the relative pressures can be robustly computed through the solution of a pressure Poisson equation and how noise in the velocities affects their estimate. Routine clinical application of these techniques requires, however, thorough validation on multiple patients/anatomies and systematic comparison with in-vitro and simulated representations. Thus, the second part of the talk illustrates the use of numerical simulation and in-vitro experimental protocols to validate these indicators with reference to aortic and cerebral vascular anatomies.
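A minimal 1D analogue of the pressure Poisson solve mentioned above: finite-difference discretization of p'' = f with Dirichlet ends, verified against an analytic solution. This sketches the numerical idea only, not the authors' 3D solver or their velocity-derived right-hand side:

```python
import numpy as np

def solve_poisson_1d(f, p_left, p_right):
    """Solve p'' = f on [0, 1] with Dirichlet boundary values,
    using a second-order finite-difference discretization."""
    n = len(f)                       # number of interior points
    h = 1.0 / (n + 1)
    # Tridiagonal system from (p[i-1] - 2 p[i] + p[i+1]) / h^2 = f[i].
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    rhs = np.array(f, dtype=float)
    rhs[0] -= p_left / h**2          # move boundary values to the RHS
    rhs[-1] -= p_right / h**2
    return np.linalg.solve(A, rhs)

# Verify against the analytic solution of p'' = -2, p(0) = p(1) = 0: p = x(1 - x).
n = 99
x = np.linspace(0, 1, n + 2)[1:-1]
p = solve_poisson_1d(np.full(n, -2.0), 0.0, 0.0)
err = np.max(np.abs(p - x * (1 - x)))
print(err)  # near machine precision: the scheme is exact for quadratics
```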
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomar, Vikas
2017-03-06
DoE-NETL partnered with Purdue University to predict the creep and associated microstructure evolution of tungsten-based refractory alloys. Researchers use grain boundary (GB) diagrams, a new concept, to establish the time-dependent creep resistance and associated microstructure evolution of grain-boundary/intergranular-film (GB/IGF) controlled creep as a function of load, environment, and temperature. The goal was to conduct a systematic study that includes the development of a theoretical framework, multiscale modeling, and experimental validation using W-based body-centered-cubic alloys, doped/alloyed with one or two of the following elements: nickel, palladium, cobalt, iron, and copper—typical refractory alloys. Prior work has already established and validated a basic theory for W-based binary and ternary alloys; the study conducted under this project extended this proven work. Based on interface diagrams, phase-field models were developed to predict long-term microstructural evolution. In order to validate the models, nanoindentation creep data was used to elucidate the role played by the interface properties in predicting long-term creep strength and microstructure evolution.
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH60-A data, DNW test data and HART II test data.
Leavy, Justine E; Bull, Fiona C; Rosenberg, Michael; Bauman, Adrian
2011-12-01
Internationally, mass media campaigns to promote regular moderate-intensity physical activity have increased recently. Evidence of mass media campaign effectiveness exists in other health areas; however, the evidence for physical activity is limited. The purpose was to systematically review the literature on physical activity mass media campaigns, 2003-2010. A focus was on reviewing evaluation designs, theory used, formative evaluation, campaign effects and outcomes. The literature was searched, resulting in 18 individual adult mass media campaigns, mostly in high-income regions and two in middle-income regions. Designs included: quasi experimental (n = 5); non experimental (n = 12); a mixed methods design (n = 1). One half used formative research. Awareness levels ranged from 17 to 95%. Seven campaigns reported significant increases in physical activity levels. The review found that beyond awareness raising, changes in other outcomes were measured, assessed but reported in varying ways. It highlighted improvements in evaluation, although limited evidence of campaign effects remains. It provides an update on the evaluation methodologies used in the adult literature. We recommend optimal evaluation design should include: (1) formative research to inform theories/frameworks, campaign content and evaluation design; (2) cohort study design with multiple data collection points; (3) sufficient duration; (4) use of validated measures; (5) sufficient evaluation resources.
ERIC Educational Resources Information Center
Wetzel, Angela Payne
2011-01-01
Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across…
Lalu, Manoj M; Sullivan, Katrina J; Mei, Shirley HJ; Moher, David; Straus, Alexander; Fergusson, Dean A; Stewart, Duncan J; Jazi, Mazen; MacLeod, Malcolm; Winston, Brent; Marshall, John; Hutton, Brian; Walley, Keith R; McIntyre, Lauralyn
2016-01-01
Evaluation of preclinical evidence prior to initiating early-phase clinical studies has typically been performed by selecting individual studies in a non-systematic process that may introduce bias. Thus, in preparation for a first-in-human trial of mesenchymal stromal cells (MSCs) for septic shock, we applied systematic review methodology to evaluate all published preclinical evidence. We identified 20 controlled comparison experiments (980 animals from 18 publications) of in vivo sepsis models. Meta-analysis demonstrated that MSC treatment of preclinical sepsis significantly reduced mortality over a range of experimental conditions (odds ratio 0.27, 95% confidence interval 0.18–0.40, latest timepoint reported for each study). Risk of bias was unclear as few studies described elements such as randomization and no studies included an appropriately calculated sample size. Moreover, the presence of publication bias resulted in a ~30% overestimate of effect and threats to validity limit the strength of our conclusions. This novel prospective application of systematic review methodology serves as a template to evaluate preclinical evidence prior to initiating first-in-human clinical studies. DOI: http://dx.doi.org/10.7554/eLife.17850.001 PMID:27870924
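A pooled odds ratio with a random-effects model, as used in this meta-analysis, can be sketched with the standard DerSimonian-Laird estimator. The study-level odds ratios and variances below are illustrative placeholders, not the review's data.

```python
import math

def pooled_odds_ratio(odds_ratios, variances):
    """DerSimonian-Laird random-effects pooling of log odds ratios."""
    y = [math.log(o) for o in odds_ratios]       # per-study log ORs
    w = [1.0 / v for v in variances]             # inverse-variance weights
    mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # Random-effects weights, pooled estimate, and 95% CI on the OR scale
    w_re = [1.0 / (v + tau2) for v in variances]
    mu_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return math.exp(mu_re), (math.exp(mu_re - 1.96 * se), math.exp(mu_re + 1.96 * se))

# Four hypothetical preclinical studies, each favoring treatment (OR < 1)
or_pooled, (lo, hi) = pooled_odds_ratio([0.2, 0.3, 0.35, 0.25], [0.05, 0.08, 0.1, 0.06])
```

With all study effects below 1, the pooled estimate also falls below 1; a publication-bias adjustment such as trim-and-fill would shrink the apparent benefit, in the direction the review reports.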
Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard
2017-04-01
Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, covering publications between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of reliability and validity studies, respectively. Results A total of 898 records were retrieved, of which 11 articles met the inclusion criteria and were reviewed. Nine studies explored concurrent validity together with inter- and intra-rater reliability, while two studies examined only concurrent validity. Reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.
Zhang, Zhongheng; Ni, Hongying; Xu, Xiao
2014-08-01
Propensity score (PS) analysis has been increasingly used in critical care medicine; however, its validation has not been systematically investigated. The present study aimed to compare effect sizes in PS-based observational studies vs. randomized controlled trials (RCTs) (or meta-analysis of RCTs). Critical care observational studies using PS were systematically searched in PubMed from inception to April 2013. Identified PS-based studies were matched to one or more RCTs in terms of population, intervention, comparison, and outcome. The effect sizes of experimental treatments were compared for PS-based studies vs. RCTs (or meta-analysis of RCTs) with a sign test. Furthermore, the ratio of odds ratios (ROR) was calculated from the interaction term of treatment × study type in a logistic regression model. An ROR < 1 indicates greater benefit for experimental treatment in RCTs compared with PS-based studies. The ROR of each comparison was pooled using a meta-analytic approach with a random-effects model. A total of 20 PS-based studies were identified and matched to RCTs. Twelve of the 20 comparisons showed a greater beneficial effect for experimental treatment in RCTs than in PS-based studies (sign test P = 0.503). The difference was statistically significant in four comparisons. The ROR could be calculated for 13 comparisons, of which four showed a significantly greater beneficial effect for experimental treatment in RCTs. The pooled ROR was 0.71 (95% CI: 0.63, 0.79; P = 0.002), suggesting that RCTs (or meta-analysis of RCTs) were more likely to report a beneficial effect for the experimental treatment than PS-based studies. The result remained unchanged in sensitivity analysis and meta-regression. In the critical care literature, PS-based observational studies are likely to report a less beneficial effect of experimental treatment compared with RCTs (or meta-analysis of RCTs). Copyright © 2014 Elsevier Inc. All rights reserved.
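For one matched comparison, the ratio of odds ratios reduces to OR_RCT / OR_PS, which equals the exponentiated treatment-by-study-type interaction coefficient of a saturated logistic model. A minimal sketch with invented mortality counts:

```python
def odds_ratio(events_t, n_t, events_c, n_c):
    """Odds ratio for an adverse outcome, treated vs. control arm."""
    a, b = events_t, n_t - events_t
    c, d = events_c, n_c - events_c
    return (a * d) / (b * c)

def ratio_of_odds_ratios(rct, ps):
    """ROR = OR_RCT / OR_PS; ROR < 1 means the RCT reports the greater
    benefit (a smaller odds ratio for the adverse outcome)."""
    return odds_ratio(*rct) / odds_ratio(*ps)

# Invented counts: (deaths_treated, n_treated, deaths_control, n_control)
ror = ratio_of_odds_ratios(rct=(30, 100, 50, 100), ps=(40, 100, 50, 100))
```

Here the RCT odds ratio (3/7) is smaller than the PS-study odds ratio (2/3), so the ROR falls below 1, matching the direction of the pooled result.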
Torsion sensing based on patterned piezoelectric beams
NASA Astrophysics Data System (ADS)
Cha, Youngsu; You, Hangil
2018-03-01
In this study, we investigated the sensing characteristics of piezoelectric beams under torsional loads. We used partially patterned piezoelectric beams to sense torsion. In particular, the piezoelectric patches are located symmetrically with respect to the line of the shear center of the beam. The patterned piezoelectric beam is modeled as a slender beam, and its electrical responses are obtained by piezoelectric electromechanical equations. To validate the modeling framework, experiments are performed using a setup that forces pure torsional deformation. Three different geometric configurations of the patterned piezoelectric layer are used for the experiments. The frequency and amplitude of the forced torsional load are systematically varied in order to study the behavior of the piezoelectric sensor. Experimental results demonstrate that two voltage outputs of the piezoelectric beam are approximately out of phase with identical amplitude. Moreover, the length of the piezoelectric layers has a significant influence on the sensing properties. Our theoretical predictions using the model support the experimental findings.
Nanomechanical effects of light unveil photons momentum in medium
Verma, Gopal; Chaudhary, Komal; Singh, Kamal P.
2017-01-01
Precision measurement of momentum transfer between light and a fluid interface has many implications, including resolving the intriguing nature of photon momentum in a medium. For example, the existence of the Abraham pressure of light under specific experimental configurations and the predictions of the Chau-Amperian formalism of optical momentum for TE and TM polarizations remain untested. Here, we quantitatively and cleanly measure the nanomechanical dynamics of a water surface excited by the radiation pressure of a laser beam. We systematically scanned a wide range of experimental parameters, including long exposure times, angle of incidence, spot size and laser polarization, and used two independent pump-probe techniques to validate a nano-bump on the water surface under all the tested conditions, in quantitative agreement with the Minkowski momentum of light. With careful experiments, we demonstrate the advantages and limitations of nanometer-resolved optical probing techniques and narrow down the actual manifestation of optical momentum in a medium. PMID:28198468
Tunneling spectroscopy of Al/AlOx/Pb subjected to hydrostatic pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jun; Hou, Xing-Yuan; Guan, Tong
2015-05-18
We develop an experimental tool to investigate the high-pressure electronic density of states by combining electron tunneling spectroscopy measurements with a high-pressure technique. It is demonstrated that tunneling spectroscopy measurement on an Al/AlOx/Pb junction can be performed systematically under hydrostatic pressure up to 2.2 GPa. Under such high pressure, the normal-state junction resistance is sensitive to the applied pressure, reflecting the variation of the band structure of the barrier material with pressure. In the superconducting state, the pressure dependence of the energy gap Δ0, the gap ratio 2Δ0/kBTc, and the phonon spectral energy is extracted and compared with those obtained in the limited pressure range. Our experimental results show the accessibility and validity of high-pressure tunneling spectroscopy, offering a wealth of information about high-pressure superconductivity.
Effect of an environmental science curriculum on students' leisure time activities
NASA Astrophysics Data System (ADS)
Blum, Abraham
Cooley and Reed's active interest measurement approach was combined with Guttman's Facet Design to construct a systematic instrument for the assessment of the impact of an environmental science course on students' behavior outside school. A quasi-matched design of teacher allocation to the experimental and control groups according to their preferred teaching style was used. A kind of dummy control curriculum was devised to enable valid comparative evaluation of a new course which differs from the traditional one in both content and goal. This made it possible to control most of the differing factors inherent in the old and new curricula. The research instrument was given to 1000 students who were taught by 28 teachers. Students who learned according to the experimental curriculum significantly increased their leisure time activities related to the environmental science curriculum. There were no significant differences between boys and girls, or between students with different achievement levels.
Preparations for Global Precipitation Measurement(GPM)Ground Validation
NASA Technical Reports Server (NTRS)
Bidwell, S. W.; Bibyk, I. K.; Duming, J. F.; Everett, D. F.; Smith, E. A.; Wolff, D. B.
2004-01-01
The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meterorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays a critical role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper describes GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error. This paper describes the GPM program for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. GPM will ensure that information gained through Ground Validation is applied to future improvements in the spaceborne retrieval algorithms. This paper discusses the potential locations for validation measurement and research, the anticipated contributions of GPM's international partners, and the interaction of Ground Validation with other GPM program elements.
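The error characterization described here, separating systematic and random components of retrieval error, can be sketched as a bias/scatter decomposition over matched satellite-ground pairs; the rain-rate numbers below are invented for illustration.

```python
import math

def error_components(satellite, ground):
    """Split retrieval error into a systematic part (mean bias) and a
    random part (standard deviation of the residuals)."""
    diffs = [s - g for s, g in zip(satellite, ground)]
    n = len(diffs)
    bias = sum(diffs) / n                               # systematic component
    random_err = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    rmse = math.sqrt(sum(d * d for d in diffs) / n)     # total error
    return bias, random_err, rmse

sat = [2.1, 3.4, 0.9, 5.2, 1.8]   # retrieved rain rate, mm/h (illustrative)
gv  = [2.0, 3.0, 1.0, 4.8, 1.5]   # ground-validation estimate, mm/h
bias, random_err, rmse = error_components(sat, gv)
```

Reporting bias and scatter separately is what lets downstream users bound the error in their research products, as the GV requirements describe.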
Menon, Rajasree; Wen, Yuchen; Omenn, Gilbert S.; Kretzler, Matthias; Guan, Yuanfang
2013-01-01
Integrating large-scale functional genomic data has significantly accelerated our understanding of gene functions. However, no algorithm has been developed to differentiate functions for isoforms of the same gene using high-throughput genomic data. This is because standard supervised learning requires ‘ground-truth’ functional annotations, which are lacking at the isoform level. To address this challenge, we developed a generic framework that interrogates public RNA-seq data at the transcript level to differentiate functions for alternatively spliced isoforms. For a specific function, our algorithm identifies the ‘responsible’ isoform(s) of a gene and generates classifying models at the isoform level instead of at the gene level. Through cross-validation, we demonstrated that our algorithm is effective in assigning functions to genes, especially the ones with multiple isoforms, and robust to gene expression levels and removal of homologous gene pairs. We identified genes in the mouse whose isoforms are predicted to have disparate functionalities and experimentally validated the ‘responsible’ isoforms using data from mammary tissue. With protein structure modeling and experimental evidence, we further validated the predicted isoform functional differences for the genes Cdkn2a and Anxa6. Our generic framework is the first to predict and differentiate functions for alternatively spliced isoforms, instead of genes, using genomic data. It is extendable to any base machine learner and other species with alternatively spliced isoforms, and shifts the current gene-centered function prediction to isoform-level predictions. PMID:24244129
An ALE meta-analysis on the audiovisual integration of speech signals.
Erickson, Laura C; Heeg, Elizabeth; Rauschecker, Josef P; Turkeltaub, Peter E
2014-11-01
The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals. Copyright © 2014 Wiley Periodicals, Inc.
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
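A two-face system test compares front-face and back-face sightings of the same target; with a perfect instrument the two faces agree after the circle transformation, so the residuals expose geometric misalignments. The sketch below assumes theodolite-style angle conventions (back-face azimuth offset by 180 degrees, back-face zenith equal to 360 degrees minus the front zenith); the B89.4.19 formalism for laser trackers is more detailed than this.

```python
def two_face_residuals(front, back):
    """Front/back residuals for one target, angles in degrees.
    front, back: (azimuth, zenith) read in the two faces."""
    az_f, ze_f = front
    az_b, ze_b = back
    # Wrap the azimuth difference into (-180, 180] before comparing
    d_az = ((az_b - az_f - 180.0 + 180.0) % 360.0) - 180.0
    d_ze = ze_f + ze_b - 360.0
    return d_az, d_ze

# Illustrative readings with a small two-face error in both angles
d_az, d_ze = two_face_residuals(front=(10.0, 80.0), back=(190.01, 280.02))
```

Non-zero residuals of this kind are the signature that a length test alone might miss, which is why the Standard pairs length tests with two-face tests.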
First observation of rotational structures in Re 168
Hartley, D. J.; Janssens, R. V. F.; Riedinger, L. L.; ...
2016-11-30
We assigned the first rotational sequences to the odd-odd nucleus 168Re. Coincidence relationships of these structures with rhenium x rays confirm the isotopic assignment, while arguments based on the γ-ray multiplicity (K-fold) distributions observed with the new bands lead to the mass assignment. Configurations for the two bands were determined through analysis of the rotational alignments of the structures and a comparison of the experimental B(M1)/B(E2) ratios with theory. Tentative spin assignments are proposed for the πh11/2νi13/2 band, based on energy level systematics for other known sequences in neighboring odd-odd rhenium nuclei, as well as on systematics seen for the signature inversion feature that is well known in this region. Furthermore, the spin assignment for the πh11/2ν(h9/2/f7/2) structure provides additional validation of the proposed spins and configurations for isomers in the 176Au → 172Ir → 168Re α-decay chain.
A framework for the damage evaluation of acoustic emission signals through Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Siracusano, Giulio; Lamonaca, Francesco; Tomasello, Riccardo; Garescì, Francesca; Corte, Aurelio La; Carnì, Domenico Luca; Carpentieri, Mario; Grimaldi, Domenico; Finocchio, Giovanni
2016-06-01
Acoustic emission (AE) is a powerful and promising nondestructive testing method for structural monitoring in civil engineering. Here, we show how systematic investigation of crack phenomena based on AE data can be significantly improved by the use of advanced signal processing techniques. Such data are a fundamental source of information that can be used as the basis for evaluating the status of the material, thereby paving the way for a new frontier of innovation made by data-enabled analytics. In this article, we propose a framework based on the Hilbert-Huang transform for the evaluation of material damage that (i) facilitates the systematic employment of both established and promising analysis criteria, and (ii) provides unsupervised tools to achieve an accurate classification of the fracture type and the discrimination between longitudinal (P-) and transversal (S-) waves related to an AE event. The experimental validation shows promising results for a reliable assessment of health status through the monitoring of civil infrastructures.
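The Hilbert spectral step of the Hilbert-Huang transform (the stage that follows empirical mode decomposition) maps each decomposed mode to an instantaneous frequency track. A minimal sketch on a synthetic tone standing in for one decomposed AE mode:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Analytic signal -> unwrapped phase -> instantaneous frequency (Hz)."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
mode = np.cos(2 * np.pi * 50.0 * t)       # stand-in for one IMF of an AE hit
f_inst = instantaneous_frequency(mode, fs)
f_mid = float(np.mean(f_inst[100:-100]))  # ignore edge effects
```

Frequency tracks of this kind are one feature on which P-/S-wave discrimination and fracture-type classification can be based; a real pipeline would first run EMD on the raw AE hit.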
An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.
Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao
2016-09-01
The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
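One common correction for positional systematic errors in plate data (not necessarily the exact algorithm of this pipeline) is a two-way median polish, the core of the classic B-score: row and column medians are iteratively removed so that positional trends drop out of the residuals.

```python
import numpy as np

def median_polish(plate, sweeps=10):
    """Iteratively subtract row and column medians from one HTS plate;
    the residuals are the numerator of the classic B-score."""
    residual = np.asarray(plate, dtype=float).copy()
    for _ in range(sweeps):
        residual -= np.median(residual, axis=1, keepdims=True)  # row trend
        residual -= np.median(residual, axis=0, keepdims=True)  # column trend
    return residual

rng = np.random.default_rng(0)
plate = rng.normal(0.0, 1.0, size=(8, 12))   # 96-well plate of raw signals
plate[:, 0] += 5.0                           # simulated edge-column artifact
corrected = median_polish(plate)
```

After polishing, the artifact column is no longer systematically offset, so a hit-calling threshold applied to the residuals is not biased by plate position.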
Akl, Elie A; Fadlallah, Racha; Ghandour, Lilian; Kdouh, Ola; Langlois, Etienne; Lavis, John N; Schünemann, Holger; El-Jardali, Fadi
2017-09-04
Groups or institutions funding or conducting systematic reviews in health policy and systems research (HPSR) should prioritise topics according to the needs of policymakers and stakeholders. The aim of this study was to develop and validate a tool to prioritise questions for systematic reviews in HPSR. We developed the tool following a four-step approach consisting of (1) defining the purpose and scope of the tool, (2) item generation and reduction, (3) testing for content and face validity, and (4) pilot testing of the tool. The research team involved international experts in HPSR, systematic review methodology and tool development, led by the Center for Systematic Reviews on Health Policy and Systems Research (SPARK). We followed an inclusive approach in determining the final selection of items to allow customisation to the user's needs. The purpose of the SPARK tool was to prioritise questions in HPSR in order to address them in systematic reviews. In the item generation and reduction phase, an extensive literature search yielded 40 relevant articles, which were reviewed by the research team to create a preliminary list of 19 candidate items for inclusion in the tool. As part of testing for content and face validity, input from international experts led to items being refined, changed, merged and added, and to the organisation of the tool into two modules. Following pilot testing, we finalised the tool, with 22 items organised in two modules: the first module including 13 items to be rated by policymakers and stakeholders, and the second including 9 items to be rated by systematic review teams. Users can customise the tool to their needs by omitting items that may not be applicable to their settings. We also developed a user manual that provides guidance on how to use the SPARK tool, along with signaling questions.
We have developed and conducted initial validation of the SPARK tool to prioritise questions for systematic reviews in HPSR, along with a user manual. By aligning systematic review production to policy priorities, the tool will help support evidence-informed policymaking and reduce research waste. We invite others to contribute with additional real-life implementation of the tool.
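Operationally, a two-module rating tool of this kind boils down to aggregating item scores while letting users omit non-applicable items. The item names and equal module weighting below are hypothetical, not the published SPARK items.

```python
def module_score(ratings):
    """Mean of the rated items (1-5 scale); None marks an omitted item."""
    scored = [v for v in ratings.values() if v is not None]
    return sum(scored) / len(scored)

# Hypothetical items; the real SPARK tool has 13 + 9 items.
policymaker_module = {"relevance_to_policy": 5, "urgency": 4, "equity_impact": None}
review_team_module = {"feasibility": 3, "evidence_availability": 4}
priority = 0.5 * module_score(policymaker_module) + 0.5 * module_score(review_team_module)
```

Scoring candidate questions this way, then ranking them, is what aligns review production with policy priorities as the abstract describes.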
Systematic reviews, systematic error and the acquisition of clinical knowledge
2010-01-01
Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172
Iridology: A systematic review.
Ernst, E
1999-02-01
Iridologists claim to be able to diagnose medical conditions through abnormalities of pigmentation in the iris. This technique is popular in many countries; it is therefore relevant to ask whether it is valid. The aim was to systematically review all interpretable tests of the validity of iridology as a diagnostic tool. Three independent literature searches were performed to identify all blinded tests, and data were extracted in a predefined, standardized fashion. Four case-control studies were found. The majority of these investigations suggest that iridology is not a valid diagnostic method. The validity of iridology as a diagnostic tool is not supported by scientific evaluations. Patients and therapists should be discouraged from using this method.
Observations on CFD Verification and Validation from the AIAA Drag Prediction Workshops
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Kleb, Bil; Vassberg, John C.
2014-01-01
The authors provide observations from the AIAA Drag Prediction Workshops that have spanned over a decade and from a recent validation experiment at NASA Langley. These workshops provide an assessment of the predictive capability of forces and moments, focused on drag, for transonic transports. It is very difficult to manage the consistency of results in a workshop setting to perform verification and validation at the scientific level, but it may be sufficient to assess it at the level of practice. Observations thus far: 1) due to simplifications in the workshop test cases, wind tunnel data are not necessarily the “correct” results that CFD should match, 2) an average of core CFD data are not necessarily a better estimate of the true solution as it is merely an average of other solutions and has many coupled sources of variation, 3) outlier solutions should be investigated and understood, and 4) the DPW series does not have the systematic build up and definition on both the computational and experimental side that is required for detailed verification and validation. Several observations regarding the importance of the grid, effects of physical modeling, benefits of open forums, and guidance for validation experiments are discussed. The increased variation in results when predicting regions of flow separation and increased variation due to interaction effects, e.g., fuselage and horizontal tail, point out the need for validation data sets for these important flow phenomena. Experiences with a recent validation experiment at NASA Langley are included to provide guidance on validation experiments.
Validation of biomarkers of food intake-critical assessment of candidate biomarkers.
Dragsted, L O; Gao, Q; Scalbert, A; Vergères, G; Kolehmainen, M; Manach, C; Brennan, L; Afman, L A; Wishart, D S; Andres Lacueva, C; Garcia-Aloy, M; Verhagen, H; Feskens, E J M; Praticò, G
2018-01-01
Biomarkers of food intake (BFIs) are a promising tool for limiting misclassification in nutrition research where more subjective dietary assessment instruments are used. They may also be used to assess compliance with dietary guidelines or with a dietary intervention. Biomarkers therefore hold promise for direct and objective measurement of food intake. However, the number of comprehensively validated biomarkers of food intake is limited to just a few. Many new candidate biomarkers emerge from metabolic profiling studies and from advances in food chemistry. Furthermore, candidate food intake biomarkers may also be identified based on extensive literature reviews such as described in the guidelines for Biomarker of Food Intake Reviews (BFIRev). To systematically and critically assess the validity of candidate biomarkers of food intake, it is necessary to outline and streamline an optimal and reproducible validation process. A consensus-based procedure was used to provide and evaluate a set of the most important criteria for systematic validation of BFIs. As a result, a validation procedure was developed comprising eight criteria: plausibility, dose-response, time-response, robustness, reliability, stability, analytical performance, and inter-laboratory reproducibility. The validation has a dual purpose: (1) to estimate the current level of validation of candidate biomarkers of food intake based on an objective and systematic approach and (2) to pinpoint which additional studies are needed to provide full validation of each candidate biomarker of food intake. This position paper on biomarker of food intake validation outlines the second step of the BFIRev procedure but may also be used as such for validation of new candidate biomarkers identified, e.g., in food metabolomic studies.
Systematic approach to verification and validation: High explosive burn models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph; Scovel, Christina A.
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time-consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments.
Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.« less
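The header-driven automation described above can be sketched as follows. The header syntax (`key = value` comment lines), field names, and input-deck template below are assumptions for illustration only; the abstract does not specify the actual HED file format.

```python
# Sketch: parse key = value metadata from a hypothetical HED-style data
# file header and use it to fill a hydro-code input template.  Header
# syntax and field names are invented; the real format may differ.

def parse_header(text, comment="#"):
    """Return a metadata dict from 'key = value' comment lines at the top."""
    meta = {}
    for line in text.splitlines():
        if not line.startswith(comment):
            break  # header ends at the first data line
        body = line.lstrip(comment).strip()
        if "=" in body:
            key, _, value = body.partition("=")
            meta[key.strip()] = value.strip()
    return meta

def make_input_deck(meta, template):
    """Fill a hydro-code input template from header metadata."""
    return template.format(**meta)

sample = "# explosive = PBX-9502\n# density = 1.890\n0.0 0.0\n0.1 0.4\n"
meta = parse_header(sample)
deck = make_input_deck(meta, "material {explosive} rho={density}")
```

Keeping the experimental parameters in the data file itself is what lets one script drive many simulations without per-experiment hand editing.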
Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda
2016-01-01
Background Many new clinical prediction rules are derived and validated. However, the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules’ performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2–4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2–3.1) compared to complete, partial and unclear verification. The summary RDOR of validation studies with inadequate sample size was 1.9 (95% CI: 1.2–3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980
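The summary statistics this review compares are simple to compute. The sketch below shows the diagnostic odds ratio (DOR) and the ratio of DORs (RDOR); the 2x2 counts are invented for illustration and are not taken from the review.

```python
# Sketch: DOR = (TP*TN)/(FP*FN) and the relative DOR used in
# meta-epidemiological comparisons.  Counts below are illustrative.

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Higher DOR means better apparent discrimination."""
    return (tp * tn) / (fp * fn)

def relative_dor(dor_flawed, dor_sound):
    """RDOR > 1 means the flawed design overestimates performance."""
    return dor_flawed / dor_sound

dor_case_control = diagnostic_odds_ratio(tp=90, fp=10, fn=10, tn=90)
dor_cohort = diagnostic_odds_ratio(tp=80, fp=20, fn=20, tn=80)
rdor = relative_dor(dor_case_control, dor_cohort)
```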
T3SEdb: data warehousing of virulence effectors secreted by the bacterial Type III Secretion System.
Tay, Daniel Ming Ming; Govindarajan, Kunde Ramamoorthy; Khan, Asif M; Ong, Terenze Yao Rui; Samad, Hanif M; Soh, Wei Wei; Tong, Minyan; Zhang, Fan; Tan, Tin Wee
2010-10-15
Effectors of the Type III Secretion System (T3SS) play a pivotal role in establishing and maintaining pathogenicity in the host, and therefore the identification of these effectors is important in understanding virulence. However, the effectors display a high level of sequence diversity, making their identification difficult. There is a need to collate and annotate existing effector sequences in public databases to enable systematic analyses of these sequences for the development of models for screening and selection of putative novel effectors from bacterial genomes that can be validated by a smaller number of key experiments. Herein, we present T3SEdb http://effectors.bic.nus.edu.sg/T3SEdb, a specialized database of annotated T3SS effector (T3SE) sequences containing 1089 records from 46 bacterial species compiled from the literature and public protein databases. Procedures have been defined for i) comprehensive annotation of the experimental status of effectors, ii) submission and curation review of records by users of the database, and iii) regular updates of existing and new T3SEdb records. Fielded keyword and sequence searches (BLAST, regular expression) are supported for both experimentally verified and hypothetical T3SEs. More than 171 clusters of T3SEs were detected based on sequence identity comparisons (intra-cluster difference up to ~60%). Owing to this high level of sequence diversity of T3SEs, the T3SEdb provides a large number of experimentally known effector sequences with wide species representation for the creation of effector predictors. We created a reliable effector prediction tool, integrated into the database, to demonstrate the application of the database for such endeavours. T3SEdb is the first specialised database reported for T3SS effectors, enriched with manual annotations that facilitated the systematic construction of a reliable prediction model for identification of novel effectors.
The T3SEdb represents a platform for inclusion of additional annotations of metadata for future developments of sophisticated effector prediction models for screening and selection of putative novel effectors from bacterial genomes/proteomes that can be validated by a small number of key experiments.
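The regular-expression sequence search mentioned above can be sketched in a few lines. The FASTA records and the N-terminal motif pattern below are invented for illustration; real T3SE motifs vary widely and are not specified in the abstract.

```python
# Sketch: regex search over FASTA-formatted effector sequences, the
# kind of query a T3SE database can support.  Records and the motif
# pattern are invented for illustration.
import re

def parse_fasta(text):
    """Yield (header, sequence) pairs from FASTA-formatted text."""
    header, chunks = None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:], []
        else:
            chunks.append(line.strip())
    if header is not None:
        yield header, "".join(chunks)

def regex_search(records, pattern):
    """Return headers of sequences whose residues match the pattern."""
    rx = re.compile(pattern)
    return [h for h, seq in records if rx.search(seq)]

fasta = ">effA\nMSIRP\n>effB\nMKKLA\n"
hits = regex_search(parse_fasta(fasta), r"^M[SK]I")
```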
Becnel, Lauren B; Ochsner, Scott A; Darlington, Yolanda F; McOwiti, Apollo; Kankanamge, Wasula H; Dehart, Michael; Naumov, Alexey; McKenna, Neil J
2017-04-25
We previously developed a web tool, Transcriptomine, to explore expression profiling data sets involving small-molecule or genetic manipulations of nuclear receptor signaling pathways. We describe advances in biocuration, query interface design, and data visualization that enhance the discovery of uncharacterized biology in these pathways using this tool. Transcriptomine currently contains about 45 million data points encompassing more than 2000 experiments in a reference library of nearly 550 data sets retrieved from public archives and systematically curated. To make the underlying data points more accessible to bench biologists, we classified experimental small molecules and gene manipulations into signaling pathways and experimental tissues and cell lines into physiological systems and organs. Incorporation of these mappings into Transcriptomine enables the user to readily evaluate tissue-specific regulation of gene expression by nuclear receptor signaling pathways. Data points from animal and cell model experiments and from clinical data sets elucidate the roles of nuclear receptor pathways in gene expression events accompanying various normal and pathological cellular processes. In addition, data sets targeting non-nuclear receptor signaling pathways highlight transcriptional cross-talk between nuclear receptors and other signaling pathways. We demonstrate with specific examples how data points that exist in isolation in individual data sets validate each other when connected and made accessible to the user in a single interface. In summary, Transcriptomine allows bench biologists to routinely develop research hypotheses, validate experimental data, or model relationships between signaling pathways, genes, and tissues. Copyright © 2017, American Association for the Advancement of Science.
Fachi, Mariana Millan; Leonart, Letícia Paula; Cerqueira, Letícia Bonancio; Pontes, Flavia Lada Degaut; de Campos, Michel Leandro; Pontarolo, Roberto
2017-06-15
A systematic and critical review was conducted on bioanalytical methods validated to quantify combinations of antidiabetic agents in human blood. The aim of this article was to verify how the validation process of bioanalytical methods is performed and the quality of the published records. The validation assays were evaluated according to international guidelines. The main problems in the validation process are pointed out and discussed to help researchers to choose methods that are truly reliable and can be successfully applied for their intended use. The combination of oral antidiabetic agents was chosen as these are some of the most studied drugs and several methods are present in the literature. Moreover, this article may be applied to the validation process of all bioanalytical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Barrett, Eva; McCreesh, Karen; Lewis, Jeremy
2014-02-01
A wide array of instruments are available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability are considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of retrieved studies. Data was extracted by the primary reviewer. The results were synthesized qualitatively using a level of evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. The reliability, validity and both reliability and validity were investigated by sixteen, two and nine studies respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis were evaluated in retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest levels of evidence for reliability exists in support of the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity supports the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement. This should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.
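The reliability thresholds cited above (ICC ≥ .7 high, ICC ≥ .9 very high) belong to the intraclass-correlation family. A minimal sketch of one member, the one-way random-effects ICC(1,1), follows; the model choice and the ratings are assumptions for illustration, since the retained studies used various ICC forms.

```python
# Sketch: one-way random-effects intraclass correlation, ICC(1,1).
# ratings is a subjects-by-raters table; values below are invented.

def icc_oneway(ratings):
    """Return ICC(1,1) = (MSB - MSW) / (MSB + (k-1)*MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    ssb = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means) for x in row)
    msb, msw = ssb / (n - 1), ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters in perfect agreement on three subjects -> ICC = 1.0
perfect = icc_oneway([[40.0, 40.0], [50.0, 50.0], [60.0, 60.0]])
```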
Thaden, Joshua T; Mogno, Ilaria; Wierzbowski, Jamey; Cottarel, Guillaume; Kasif, Simon; Collins, James J; Gardner, Timothy S
2007-01-01
Machine learning approaches offer the potential to systematically identify transcriptional regulatory interactions from a compendium of microarray expression profiles. However, experimental validation of the performance of these methods at the genome scale has remained elusive. Here we assess the global performance of four existing classes of inference algorithms using 445 Escherichia coli Affymetrix arrays and 3,216 known E. coli regulatory interactions from RegulonDB. We also developed and applied the context likelihood of relatedness (CLR) algorithm, a novel extension of the relevance networks class of algorithms. CLR demonstrates an average precision gain of 36% relative to the next-best performing algorithm. At a 60% true positive rate, CLR identifies 1,079 regulatory interactions, of which 338 were in the previously known network and 741 were novel predictions. We tested the predicted interactions for three transcription factors with chromatin immunoprecipitation, confirming 21 novel interactions and verifying our RegulonDB-based performance estimates. CLR also identified a regulatory link providing central metabolic control of iron transport, which we confirmed with real-time quantitative PCR. The compendium of expression data compiled in this study, coupled with RegulonDB, provides a valuable model system for further improvement of network inference algorithms using experimental data. PMID:17214507
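CLR scores each mutual-information value against the background of its row and column before combining the two z-scores. The sketch below follows that published idea in outline only, with a tiny invented MI matrix; it is not the authors' implementation.

```python
# Sketch of the CLR scoring step: each MI value is z-scored against its
# row and column backgrounds (negative z clipped to 0) and the two are
# combined.  The 3x3 MI matrix is invented for illustration.
import math

def clr_scores(mi):
    """mi: symmetric mutual-information matrix; returns CLR score matrix."""
    n = len(mi)
    def stats(vals):
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
        return mu, sd
    row_stats = [stats(row) for row in mi]
    scores = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            zi = max(0.0, (mi[i][j] - row_stats[i][0]) / row_stats[i][1])
            zj = max(0.0, (mi[i][j] - row_stats[j][0]) / row_stats[j][1])
            scores[i][j] = math.sqrt(zi * zi + zj * zj)
    return scores

mi = [[1.0, 0.2, 0.1],
      [0.2, 1.0, 0.9],
      [0.1, 0.9, 1.0]]
scores = clr_scores(mi)  # pair (1,2) scores high, pair (0,2) scores zero
```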
Hooijmans, Carlijn R; Tillema, Alice; Leenaars, Marlies; Ritskes-Hoitinga, Merel
2010-01-01
Collecting and analysing all available literature before starting an animal experiment is important, and it is indispensable when writing a systematic review (SR) of animal research. Writing such a review prevents unnecessary duplication of animal studies and thus unnecessary animal use (Reduction). One of the factors currently impeding the production of ‘high-quality’ SRs in laboratory animal science is the fact that searching for all available literature concerning animal experimentation is rather difficult. In order to diminish these difficulties, we developed a search filter for PubMed to detect all publications concerning animal studies. This filter was compared with the method most frequently used, the PubMed Limit: Animals, and validated further by performing two PubMed topic searches. Our filter performs much better than the PubMed limit: it retrieves, on average, 7% more records. Other important advantages of our filter are that it also finds the most recent records and that it is easy to use. All in all, by using our search filter in PubMed, all available literature concerning animal studies on a specific topic can easily be found and assessed, which will help in increasing the scientific quality and thereby the ethical validity of animal experiments. PMID:20551243
Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta
2017-02-01
The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic approach, Design of Experiments, was applied to optimize ESI source parameters and to evaluate method robustness; a rapid, stable and cost-effective assay was therefore developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range (5-2500 pg/ml; R² > 0.98). The accuracies and intra- and interday precisions were less than 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. Application of the Design of Experiments approach allowed for fast and efficient analytical method development and validation, as well as for reduced usage of the chemicals necessary for regular method optimization. The proposed technique was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
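A common Design of Experiments layout for screening instrument parameters such as ESI source settings is the two-level full factorial. The sketch below generates one; the factor names and levels are illustrative, not the published settings.

```python
# Sketch: a two-level full-factorial design for parameter screening.
# Factor names and low/high levels are invented for illustration.
from itertools import product

def full_factorial(factors):
    """factors: dict name -> (low, high); returns a list of run dicts."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

design = full_factorial({
    "capillary_kV": (2.5, 4.0),
    "desolvation_C": (350, 500),
    "gas_flow_Lh": (600, 900),
})  # 2**3 = 8 runs covering every level combination
```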
NASA Astrophysics Data System (ADS)
Huang, Yan-Hua; Yang, Sheng-Qi; Zhao, Jian
2016-12-01
A three-dimensional particle flow code (PFC3D) was used for a systematic numerical simulation of the strength failure and cracking behavior of rock-like material specimens containing two unparallel fissures under conventional triaxial compression. The micro-parameters of the parallel bond model were first calibrated using the laboratory results of intact specimens and then validated from the experimental results of pre-fissured specimens under triaxial compression. Numerically simulated stress-strain curves, strength and deformation parameters and macro-failure modes of pre-fissured specimens were all in good agreement with the experimental results. The relationship between stress and the micro-crack numbers was summarized. Crack initiation, propagation and coalescence process of pre-fissured specimens were analyzed in detail. Finally, horizontal and vertical cross sections of numerical specimens were derived from PFC3D. A detailed analysis to reveal the internal damage behavior of rock under triaxial compression was carried out. The experimental and simulated results are expected to improve the understanding of the strength failure and cracking behavior of fractured rock under triaxial compression.
Numerical simulation of turbulent gas flames in tubes.
Salzano, E; Marra, F S; Russo, G; Lee, J H S
2002-12-02
Computational fluid dynamics (CFD) is an emerging technique to predict possible consequences of gas explosions, and it is often considered a powerful and accurate tool to obtain detailed results. However, systematic analyses of the reliability of this approach for real-scale industrial configurations are still needed. Furthermore, few experimental data are available for comparison and validation. In this work, a set of well documented experimental data related to flame acceleration in obstacle-filled tubes filled with flammable gas-air mixtures has been simulated. In these experiments, terminal steady flame speeds corresponding to different propagation regimes were observed, thus allowing a clear and prompt characterisation of the numerical results with respect to numerical parameters such as grid definition, geometrical parameters such as blockage ratio, and mixture parameters such as mixture reactivity. The CFD code AutoReagas was used for the simulations. Numerical predictions were compared with available experimental data and some insights into the code accuracy were obtained. Computational results are satisfactory for the relatively slower turbulent deflagration regimes but only fair when the choking regime is observed, whereas transition to quasi-detonation or Chapman-Jouguet (CJ) detonation was never predicted.
Short-Term Experiments on Ion Transport by Seedlings and Excised Roots 1
Huang, Zhang-Zhi; Yan, Xiaolong; Jalil, Abdul; Norlyn, Jack D.; Epstein, Emanuel
1992-01-01
The absorption of K+ by excised roots of barley (Hordeum vulgare L. cv California Mariout) has been systematically compared with that of entire, undisturbed seedlings. Some experiments have also been done with wheat (Triticum aestivum L.) and an amphiploid obtained from a cross between it and salt-tolerant tall wheatgrass (Lophopyrum elongatum Host Löve [syn. Agropyron elongatum Host]). For all three genotypes, the rate of K+ absorption measured in a 20-min period was identical for entire 8-d-old seedlings and their excised roots within the experimental error. Manipulation gentler than root excision, viz. careful transfer of seedlings from one experimental solution to another, was also without effect on the rate of K+ absorption. Absorption of K+ measured by assay of its 86Rb label in the tissue was identical with that measured by K+ depletion of the experimental solutions assayed chemically. For the plant materials and conditions of these experiments, the excised root technique for studying ion transport into roots is validated. The advantages of the technique, and findings differing from the present ones, are discussed. PMID:16653217
NASA Astrophysics Data System (ADS)
Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.
2018-06-01
A novel experimental setup has been implemented to provide accurate electron scattering cross sections from molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam by a molecular target. High electron energy resolution is achieved through confinement in a magnetic gas trap, where electrons are cooled by successive collisions with N2. Additionally, we developed and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section in the considered energy range with benchmark values available in the literature.
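The principle behind such a transmission measurement is the Beer-Lambert law: the total cross section follows from the measured attenuation, the target number density, and the cell length. The numbers in the sketch below are illustrative only.

```python
# Sketch: total scattering cross section from beam attenuation,
# sigma = ln(I0/I) / (n * L).  All numbers below are illustrative.
import math

def total_cross_section(i0, i, number_density, path_length):
    """I = I0 exp(-n sigma L)  =>  sigma = ln(I0/I)/(n L)."""
    return math.log(i0 / i) / (number_density * path_length)

# Example: 20% attenuation over a 0.1 m cell at n = 3.3e20 m^-3
sigma = total_cross_section(i0=1000.0, i=800.0,
                            number_density=3.3e20, path_length=0.1)
```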
Tavares, Leoberto Costa; do Amaral, Antonia Tavares
2004-03-15
The carbonyl group frequency in the infrared region was determined in a systematic manner for 4-substituted N-[(dimethylamine)methyl] benzamides (set A) and their hydrochlorides (set B), whose local anesthetic activity had been evaluated. Application of the Hammett equation to the values of the carbonyl group absorption frequency, nu(C=O), using the electronic constants sigma, sigma(I), sigma(R), I and R leads to meaningful correlations. The nature and the contribution of substituent group electronic effects on the polarity of the carbonyl group were also analyzed. The use of nu(C=O) as an experimental electronic parameter for QSPR studies was validated.
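A Hammett correlation of this kind is an ordinary least-squares fit of nu(C=O) against the substituent constant. The sketch below fits such a line; the sigma values and frequencies are synthetic, not the published data.

```python
# Sketch: least-squares fit of nu(C=O) versus Hammett sigma.
# The data points below are synthetic and lie exactly on a line.

def least_squares(xs, ys):
    """Return (slope, intercept) of the best-fit line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

sigma = [-0.27, 0.0, 0.23, 0.45, 0.78]     # hypothetical substituent set
nu = [1630.0 + 12.0 * s for s in sigma]    # exact line for the demo
slope, intercept = least_squares(sigma, nu)
```

In a real Hammett analysis the slope (rho) measures the sensitivity of the carbonyl polarity to the substituent's electronic effect.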
Henderson, Valerie C; Kimmelman, Jonathan; Fergusson, Dean; Grimshaw, Jeremy M; Hackam, Dan G
2013-01-01
The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address. We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria--most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation. By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice. Please see later in the article for the Editors' Summary.
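The most common recommendation, a power calculation to determine sample size, reduces to simple arithmetic under the usual normal approximation. The sketch below is one standard form for a two-sided, two-sample comparison; the default z values correspond to alpha = 0.05 and 80% power, and the effect sizes are illustrative.

```python
# Sketch: per-group sample size for a two-sample comparison using the
# normal approximation n = 2 * (z_alpha + z_beta)^2 / d^2, where d is
# the standardized effect size.  Defaults: alpha=0.05 (two-sided,
# z=1.959964) and 80% power (z=0.841621).
import math

def n_per_group(effect_size, z_alpha=1.959964, z_beta=0.841621):
    """Return the per-group n, rounded up to a whole animal."""
    z = z_alpha + z_beta
    return math.ceil(2 * z * z / effect_size ** 2)

n_medium = n_per_group(effect_size=0.5)  # medium effect needs far more animals
n_large = n_per_group(effect_size=1.0)
```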
Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Roy, Jean-Sébastien; Desmeules, François
2017-01-01
More evidence on the diagnostic validity of physical examination tests for knee disorders is needed to reduce reliance on frequently used and costly imaging tests. The aim was to conduct a systematic review of systematic reviews (SR) and meta-analyses (MA) evaluating the diagnostic validity of physical examination tests for knee disorders. A structured literature search was conducted in five databases until January 2016. Methodological quality was assessed using the AMSTAR. Seventeen reviews were included, with a mean AMSTAR score of 5.5 ± 2.3. Based on six SRs, only the Lachman test for ACL injuries is diagnostically valid when individually performed (likelihood ratio (LR+): 10.2, LR-: 0.2). Based on two SRs, the Ottawa Knee Rule is a valid screening tool for knee fractures (LR-: 0.05). Based on one SR, the EULAR criteria had a post-test probability of 99% for the diagnosis of knee osteoarthritis. Based on two SRs, a complete physical examination performed by a trained health provider was found to be diagnostically valid for ACL, PCL and meniscal injuries, as well as for cartilage lesions. When individually performed, common physical tests are rarely able to rule in or rule out a specific knee disorder, except the Lachman test for ACL injuries. There is low-quality evidence concerning the validity of combining history elements and physical tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
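Likelihood ratios such as the Lachman test's LR+ of 10.2 act on pre-test odds. The sketch below shows that arithmetic; the 30% pre-test probability is an illustrative assumption, not a figure from the review.

```python
# Sketch: post-test probability from a pre-test probability and a
# likelihood ratio: odds = p/(1-p); post-test odds = odds * LR.
# The 30% pre-test prevalence below is illustrative only.

def post_test_probability(pre_test, likelihood_ratio):
    odds = pre_test / (1.0 - pre_test) * likelihood_ratio
    return odds / (1.0 + odds)

p_pos = post_test_probability(0.30, 10.2)  # after a positive Lachman
p_neg = post_test_probability(0.30, 0.2)   # after a negative Lachman
```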
Dichter, Martin Nikolaus; Schwab, Christian G G; Meyer, Gabriele; Bartholomeyczik, Sabine; Halek, Margareta
2016-02-01
For people with dementia, the concept of quality of life (Qol) reflects the disease's impact on the whole person. Thus, Qol is an increasingly used outcome measure in dementia research. This systematic review was performed to identify available dementia-specific Qol measurements and to assess the quality of linguistic validations and reliability studies of these measurements (PROSPERO 2013: CRD42014008725). The MEDLINE, CINAHL, EMBASE, PsycINFO, and Cochrane Methodology Register databases were systematically searched without any date restrictions. Forward and backward citation tracking were performed on the basis of selected articles. A total of 70 articles addressing 19 dementia-specific Qol measurements were identified; nine measurements were adapted to nonorigin countries. The quality of the linguistic validations varied from insufficient to good. Internal consistency was the most frequently tested reliability property. Most of the reliability studies lacked internal validity. Qol measurements for dementia are insufficiently linguistically validated and not well tested for reliability. None of the identified measurements can be recommended without further research. The application of international guidelines and quality criteria is strongly recommended for the performance of linguistic validations and reliability studies of dementia-specific Qol measurements. Copyright © 2016 Elsevier Inc. All rights reserved.
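Internal consistency, the most frequently tested reliability property above, is usually reported as Cronbach's alpha. A minimal sketch follows; the item scores are invented for illustration.

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum(item variances)/var(total)).
# Item scores below are invented; population variances are used.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of per-item score lists, respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three items that rise and fall together -> alpha = 1.0
alpha = cronbach_alpha([[2, 4, 6, 8], [2, 4, 6, 8], [3, 5, 7, 9]])
```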
Systematic Omics Analysis Review (SOAR) Tool to Support Risk Assessment
McConnell, Emma R.; Bell, Shannon M.; Cote, Ila; Wang, Rong-Lin; Perkins, Edward J.; Garcia-Reyero, Natàlia; Gong, Ping; Burgoon, Lyle D.
2014-01-01
Environmental health risk assessors are challenged to understand and incorporate new data streams as the field of toxicology continues to adopt new molecular and systems biology technologies. Systematic screening reviews can help risk assessors and assessment teams determine which studies to consider for inclusion in a human health assessment. A tool for systematic reviews should be standardized and transparent in order to consistently determine which studies meet minimum quality criteria prior to performing in-depth analyses of the data. The Systematic Omics Analysis Review (SOAR) tool is focused on assisting risk assessment support teams in performing systematic reviews of transcriptomic studies. SOAR is a spreadsheet tool of 35 objective questions developed by domain experts, focused on transcriptomic microarray studies, and including four main topics: test system, test substance, experimental design, and microarray data. The tool will be used as a guide to identify studies that meet basic published quality criteria, such as those defined by the Minimum Information About a Microarray Experiment standard and the Toxicological Data Reliability Assessment Tool. Seven scientists were recruited to test the tool by using it to independently rate 15 published manuscripts that study chemical exposures with microarrays. Using their feedback, questions were weighted based on importance of the information and a suitability cutoff was set for each of the four topic sections. The final validation resulted in 100% agreement between the users on four separate manuscripts, showing that the SOAR tool may be used to facilitate the standardized and transparent screening of microarray literature for environmental human health risk assessment. PMID:25531884
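The weighted-question scoring with per-section cutoffs that SOAR applies can be sketched generically. The questions, weights, and cutoff below are invented; the actual tool has 35 expert-weighted questions in four sections.

```python
# Sketch: weighted yes/no screening with a section cutoff, the general
# scheme behind SOAR.  Questions, weights, and the cutoff are invented.

def section_passes(answers, weights, cutoff):
    """answers: dict question -> bool; score = sum of weights of 'yes'."""
    score = sum(weights[q] for q, ok in answers.items() if ok)
    return score >= cutoff

weights = {"dose_reported": 3, "replicates": 2, "platform_stated": 1}
study = {"dose_reported": True, "replicates": True, "platform_stated": False}
ok = section_passes(study, weights, cutoff=4)  # 3 + 2 = 5, meets the cutoff
```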
Role of metabolism and viruses in aflatoxin-induced liver cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groopman, John D.; Kensler, Thomas W.
The use of biomarkers in molecular epidemiology studies for identifying stages in the progression of development of the health effects of environmental agents has the potential for providing important information for critical regulatory, clinical and public health problems. Investigations of aflatoxins probably represent one of the most extensive data sets in the field and this work may serve as a template for future studies of other environmental agents. The aflatoxins are naturally occurring mycotoxins found on foods such as corn, peanuts, various other nuts and cottonseed and they have been demonstrated to be carcinogenic in many experimental models. As a result of nearly 30 years of study, experimental data and epidemiological studies in human populations, aflatoxin B1 was classified as carcinogenic to humans by the International Agency for Research on Cancer. The long-term goal of the research described herein is the application of biomarkers to the development of preventative interventions for use in human populations at high-risk for cancer. Several of the aflatoxin-specific biomarkers have been validated in epidemiological studies and are now being used as intermediate biomarkers in prevention studies. The development of these aflatoxin biomarkers has been based upon the knowledge of the biochemistry and toxicology of aflatoxins gleaned from both experimental and human studies. These biomarkers have subsequently been utilized in experimental models to provide data on the modulation of these markers under different situations of disease risk. This systematic approach provides encouragement for preventive interventions and should serve as a template for the development, validation and application of other chemical-specific biomarkers to cancer or other chronic diseases.
Validation of a sampling plan to generate food composition data.
Sammán, N C; Gimenez, M A; Bassett, N; Lobo, M O; Marcoleri, M E
2016-02-15
A methodology for developing systematic food sampling plans was proposed. Long-life whole milk, skimmed milk, and sunflower oil were selected to validate the methodology in Argentina. The fatty acid profile of all foods, proximate composition, and calcium content of the milks were determined with AOAC methods. The number of samples (n) was calculated by applying Cochran's formula with coefficients of variation ⩽12% and a maximum permissible estimate error (r) ⩽5% for calcium content in milks and unsaturated fatty acids in oil. The calculated n was 9, 11, and 21 for long-life whole milk, skimmed milk, and sunflower oil, respectively. Sample units were randomly collected from production sites and sent to laboratories. The r calculated from the experimental data was ⩽10%, indicating high accuracy in determining the analyte content of greatest variability and supporting the reliability of the proposed sampling plan. The methodology is an adequate and useful tool for developing sampling plans for food composition analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
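The sample-size calculation above can be sketched with the relative-error form of Cochran's formula, n = (t·CV/r)². The exact variant and t value the authors used are not given in the abstract, so t ≈ 1.96 (95% confidence) is assumed here; the analyte-specific CVs that would reproduce their n values of 9, 11, and 21 are not reported either.

```python
import math

def cochran_n(cv, r, t=1.96):
    """Sample size from Cochran's formula (relative-error form, assumed variant).

    cv: coefficient of variation of the analyte (fraction, e.g. 0.12)
    r:  maximum permissible relative estimate error (fraction, e.g. 0.05)
    t:  normal-curve abscissa for the chosen confidence level (1.96 ~ 95%)
    """
    return math.ceil((t * cv / r) ** 2)

# Using the limits quoted in the abstract: CV = 12%, r = 5%
print(cochran_n(0.12, 0.05))  # -> 23 under these assumptions
```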
The Fast Scattering Code (FSC): Validation Studies and Program Guidelines
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Dunn, Mark H.
2011-01-01
The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.
The reliability and validity of ultrasound to quantify muscles in older adults: a systematic review
Scafoglieri, Aldo; Jager‐Wittenaar, Harriët; Hobbelen, Johannes S.M.; van der Schans, Cees P.
2017-01-01
Abstract This review evaluates the reliability and validity of ultrasound to quantify muscles in older adults. The databases PubMed, Cochrane, and Cumulative Index to Nursing and Allied Health Literature were systematically searched for studies. In 17 studies, the reliability (n = 13) and validity (n = 8) of ultrasound to quantify muscles in community‐dwelling older adults (≥60 years) or a clinical population were evaluated. Four out of 13 reliability studies investigated both intra‐rater and inter‐rater reliability. Intraclass correlation coefficient (ICC) scores for reliability ranged from −0.26 to 1.00. The highest ICC scores were found for the vastus lateralis, rectus femoris, upper arm anterior, and the trunk (ICC = 0.72 to 1.00). All included validity studies found ICC scores ranging from 0.92 to 0.999. Two studies describing the validity of ultrasound to predict lean body mass showed good validity as compared with dual‐energy X‐ray absorptiometry (r² = 0.92 to 0.96). This systematic review shows that ultrasound is a reliable and valid tool for the assessment of muscle size in older adults. More high‐quality research is required to confirm these findings in both clinical and healthy populations. Furthermore, ultrasound assessment of small muscles needs further evaluation. Ultrasound to predict lean body mass is feasible; however, future research is required to validate prediction equations in older adults with varying function and health. PMID:28703496
Breeze, John; Clasper, J C
2013-12-01
Explosively propelled fragments are the most common cause of injury to soldiers on current operations. Researchers desire models to predict their injurious effects so as to refine methods of potential protection. Well-validated physical and numerical models exist for the penetration of standardised fragment-simulating projectiles (FSPs) through muscle, but not for skin, which reduces the utility of such models. A systematic review of the literature was undertaken using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology to identify all open-source information quantifying the effects of postmortem human subject (PMHS) and animal skin on the retardation of metallic projectiles. Projectile sectional density (mass over presented cross-sectional area) was compared with the velocity required for skin perforation or penetration, with regard to skin origin (animal vs PMHS), projectile shape (sphere vs cylinder), and skin backing (isolated skin vs skin backed by muscle). Seventeen original experimental studies were identified, predominantly using skin from the thigh. No statistical difference was found in the velocity required for skin perforation with regard to skin origin or projectile shape. A greater velocity was required to perforate intact skin on a whole limb than isolated skin alone (p<0.05). An empirical relationship was generated describing the velocity required for metallic FSPs of a range of sectional densities to perforate skin. Skin has a significant effect on the retardation of FSPs, necessitating its incorporation in future injury models. Perforation algorithms based on animal and PMHS skin can be used interchangeably, as can spheres and cylinders of matching sectional density. Future numerical simulations of skin perforation must match the velocity for penetration and also require experimental determination of mechanical skin properties, such as tensile strength, strain, and elasticity at high strain rates.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...
A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.
Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro
2012-06-01
To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm(2) to 40 × 40 cm(2). The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. 
The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
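The relationship between a measured HVL and an effective attenuation coefficient, which underlies this style of source characterization, can be sketched as follows. This monoenergetic-beam approximation is an illustrative simplification, not the paper's hybrid dose computation algorithm, which uses the full kVp/HVL-derived spectrum.

```python
import math

def mu_from_hvl(hvl_mm):
    """Effective linear attenuation coefficient (1/mm) from a measured HVL,
    via the half-value relation exp(-mu * HVL) = 1/2."""
    return math.log(2) / hvl_mm

def transmitted_fraction(hvl_mm, depth_mm):
    """Fraction of the primary beam remaining after depth_mm of the
    HVL-defining material (e.g. aluminum), ignoring scatter and hardening."""
    return math.exp(-mu_from_hvl(hvl_mm) * depth_mm)

# A beam with HVL = 3.0 mm Al is halved by 3 mm and quartered by 6 mm of Al
print(transmitted_fraction(3.0, 3.0))  # ~0.5
print(transmitted_fraction(3.0, 6.0))  # ~0.25
```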
de Vries, C E E; Kalff, M C; Prinsen, C A C; Coulman, K D; den Haan, C; Welbourn, R; Blazeby, J M; Morton, J M; van Wagensveld, B A
2018-06-08
The objective of this study is to systematically assess the quality of existing patient-reported outcome measures developed and/or validated for Quality of Life measurement in bariatric surgery (BS) and body contouring surgery (BCS). We conducted a systematic literature search in PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Database Systematic Reviews and CENTRAL identifying studies on measurement properties of BS and BCS Quality of Life instruments. For all eligible studies, we evaluated the methodological quality of the studies by using the COnsensus-based Standards for the selection of health Measurement INstruments checklist and the quality of the measurement instruments by applying quality criteria. Four degrees of recommendation were assigned to validated instruments (A-D). Out of 4,354 articles, a total of 26 articles describing 24 instruments were included. No instrument met all requirements (category A). Seven instruments have the potential to be recommended depending on further validation studies (category B). Of these seven, the BODY-Q has the strongest evidence for content validity in BS and BCS. Two instruments had poor quality in at least one required quality criterion (category C). Fifteen instruments were minimally validated (category D). The BODY-Q, developed for BS and BCS, possessed the strongest evidence for quality of measurement properties and has the potential to be recommended in future clinical trials. © 2018 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of World Obesity Federation.
Semi-automating the manual literature search for systematic reviews increases efficiency.
Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald
2010-03-01
To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Given the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard against which to judge the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results on accuracy of article detection (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.
7 CFR 272.11 - Systematic Alien Verification for Entitlements (SAVE) Program.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 4 2013-01-01 2013-01-01 false Systematic Alien Verification for Entitlements (SAVE... FOR PARTICIPATING STATE AGENCIES § 272.11 Systematic Alien Verification for Entitlements (SAVE... and Naturalization Service (INS), in order to verify the validity of documents provided by aliens...
Assessing the validity of subjective reports in the auditory streaming paradigm.
Farkas, Dávid; Denham, Susan L; Bendixen, Alexandra; Winkler, István
2016-04-01
While subjective reports provide a direct measure of perception, their validity is not self-evident. Here, the authors tested three possible biasing effects on perceptual reports in the auditory streaming paradigm: errors due to imperfect understanding of the instructions, voluntary perceptual biasing, and susceptibility to implicit expectations. (1) Analysis of the responses to catch trials separately promoting each of the possible percepts allowed the authors to exclude participants who likely have not fully understood the instructions. (2) Explicit biasing instructions led to markedly different behavior than the conventional neutral-instruction condition, suggesting that listeners did not voluntarily bias their perception in a systematic way under the neutral instructions. Comparison with a random response condition further supported this conclusion. (3) No significant relationship was found between social desirability, a scale-based measure of susceptibility to implicit social expectations, and any of the perceptual measures extracted from the subjective reports. This suggests that listeners did not significantly bias their perceptual reports due to possible implicit expectations present in the experimental context. In sum, these results suggest that valid perceptual data can be obtained from subjective reports in the auditory streaming paradigm.
Impact of mismatched and misaligned laser light sheet profiles on PIV performance
NASA Astrophysics Data System (ADS)
Grayson, K.; de Silva, C. M.; Hutchins, N.; Marusic, I.
2018-01-01
The effect of mismatched or misaligned laser light sheet profiles on the quality of particle image velocimetry (PIV) results is considered in this study. Light sheet profiles with differing widths, shapes, or alignment can reduce the correlation between PIV images and increase experimental errors. Systematic PIV simulations isolate these behaviours to assess the sensitivity and implications of light sheet mismatch on measurements. The simulations in this work use flow fields from a turbulent boundary layer; however, the behaviours and impacts of laser profile mismatch are highly relevant to any fluid flow or PIV application. Experimental measurements from a turbulent boundary layer facility are incorporated, as well as additional simulations matched to experimental image characteristics, to validate the synthetic image analysis. Experimental laser profiles are captured using a modular laser profiling camera, designed to quantify the distribution of laser light sheet intensities and inform any corrective adjustments to an experimental configuration. Results suggest that an offset of just 1.35 standard deviations in the Gaussian light sheet intensity distributions can cause a 40% reduction in the average correlation coefficient and a 45% increase in spurious vectors. Errors in measured flow statistics are also amplified when two successive laser profiles are no longer well matched in alignment or intensity distribution. Consequently, an awareness of how laser light sheet overlap influences PIV results can guide faster setup of an experiment, as well as achieve superior experimental measurements.
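The sensitivity to profile misalignment can be illustrated with the normalized overlap of two unit-variance Gaussian light sheet intensity profiles offset by δ standard deviations, which equals exp(-δ²/4) analytically. This toy model is not the paper's full PIV simulation, but for δ = 1.35 it already predicts a drop in profile overlap of roughly the magnitude of the reported correlation loss.

```python
import math

def gaussian_overlap(delta, sigma=1.0, n=20001, span=10.0):
    """Normalized cross-correlation of two Gaussian intensity profiles
    whose centres are offset by delta * sigma, via numerical integration.
    Analytically this equals exp(-delta**2 / 4) for equal-width profiles."""
    dx = 2 * span / (n - 1)
    xs = [-span + i * dx for i in range(n)]
    g = lambda x: math.exp(-x * x / (2 * sigma ** 2))
    num = sum(g(x) * g(x - delta * sigma) for x in xs) * dx
    den = sum(g(x) ** 2 for x in xs) * dx
    return num / den

overlap = gaussian_overlap(1.35)
print(round(overlap, 3), round(1 - overlap, 3))  # ~0.634 remaining, ~37% loss
```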
Carnahan, Heather; Herold, Jodi
2015-01-01
ABSTRACT Purpose: To review the literature on simulation-based learning experiences and to examine their potential to have a positive impact on physiotherapy (PT) learners' knowledge, skills, and attitudes in entry-to-practice curricula. Method: A systematic literature search was conducted in the MEDLINE, CINAHL, Embase Classic+Embase, Scopus, and Web of Science databases, using keywords such as physical therapy, simulation, education, and students. Results: A total of 820 abstracts were screened, and 23 articles were included in the systematic review. While there were few randomized controlled trials with validated outcome measures, several findings about simulation can positively inform the design of PT entry-to-practice curricula. Using simulators to provide specific output feedback can help students learn specific skills. Computer simulations can also augment students' learning experience. Human simulation experiences in managing the acute patient in the ICU are well received by students, positively influence their confidence, and decrease their anxiety. There is evidence that simulated learning environments can replace a portion of a full-time 4-week clinical rotation without impairing learning. Conclusions: Simulation-based learning activities are being effectively incorporated into PT curricula. More rigorously designed experimental studies that include a cost–benefit analysis are necessary to help curriculum developers make informed choices in curriculum design. PMID:25931672
A Systematic Approach for Real-Time Operator Functional State Assessment
NASA Technical Reports Server (NTRS)
Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean; Li, Jiang; Li, Feng; McKenzie, Frederick
2012-01-01
A task overload condition often leads to high stress for an operator, causing performance degradation and possibly disastrous consequences. Just as dangerous, with automated flight systems, an operator may experience a task underload condition (during the en-route flight phase, for example), becoming easily bored and finding it difficult to maintain sustained attention. When an unexpected event occurs, either internal or external to the automated system, the disengaged operator may neglect, misunderstand, or respond slowly or inappropriately to the situation. In this paper, we discuss an approach for Operator Functional State (OFS) monitoring in a typical aviation environment. A systematic ground truth finding procedure has been designed based on subjective evaluations, performance measures, and strong physiological indicators. The derived OFS ground truth is continuous in time, unlike the very sparse OFS estimates obtained from expert review or subjective evaluations, and it can capture the variations of OFS during a mission to better guide the training of the OFS assessment model. Furthermore, an OFS assessment model framework based on advanced machine learning techniques was designed, and the systematic approach was then verified and validated with experimental data collected in a high-fidelity Boeing 737 simulator. Preliminary results show highly accurate engagement/disengagement detection, making the approach suitable for real-time assessment of pilot engagement.
NASA Astrophysics Data System (ADS)
Smits, Kathleen M.; Ngo, Viet V.; Cihan, Abdullah; Sakaki, Toshihiro; Illangasekare, Tissa H.
2012-12-01
Bare soil evaporation is a key process for water exchange between the land and the atmosphere and an important component of the water balance. However, there is no agreement on the best modeling methodology for determining evaporation under different atmospheric boundary conditions, and directly measured soil evaporation data with which to validate these methods and establish the validity of their mathematical formulations are lacking. Thus, a need exists to systematically compare evaporation estimates from existing methods against experimental observations. The goal of this work is to test the different conceptual and mathematical formulations and surface boundary conditions used to estimate evaporation from bare soils. Such a comparison required the development of a numerical model able to incorporate these boundary conditions. For this model, we modified a previously developed theory that allows nonequilibrium liquid/gas phase change with gas-phase vapor diffusion to better account for dry soil conditions. Precision data under well-controlled transient heat and wind boundary conditions were generated, and results from numerical simulations were compared with the experimental data. Results demonstrate that the approaches based on different boundary conditions varied in their ability to capture different stages of evaporation. All approaches have benefits and limitations, and no single approach can be deemed most appropriate for every scenario. Comparisons of different formulations of the surface boundary condition confirm the need for further research on heat and vapor transport processes in soil to improve modeling accuracy.
NASA Astrophysics Data System (ADS)
Parnis, J. Mark; Mackay, Donald; Harner, Tom
2015-06-01
Henry's Law constants (H) and octanol-air partition coefficients (KOA) for polycyclic aromatic hydrocarbons (PAHs) and selected nitrogen-, oxygen- and sulfur-containing derivatives have been computed using the COSMO-RS method between -5 and 40 °C in 5 °C intervals. The accuracy of the estimation was assessed by comparison of COSMOtherm values with published experimental temperature-dependence data for these and similar PAHs. COSMOtherm log H estimates with temperature variation for parent PAHs are shown to have a root-mean-square (RMS) error of 0.38, based on available validation data. Estimates of O-, N- and S-substituted derivative log H values are found to have RMS errors of 0.30 at 25 °C. Log KOA estimates with temperature variation from COSMOtherm are shown to be strongly correlated with experimental values for a small set of unsubstituted PAHs, but with a systematic underestimation and an associated RMS error of 1.11. A similar RMS error of 1.64 was found for COSMO-RS estimates of a group of critically evaluated log KOA values at room temperature. Validation demonstrates that COSMOtherm estimates of H and KOA are of sufficient accuracy to be used for property screening and preliminary environmental risk assessment, and perform very well for modeling the influence of temperature on partitioning behavior in the temperature range -5 to 40 °C. Temperature-dependent shifts of up to two log units in log H and one log unit in log KOA are predicted for PAH species over the range -5 to 40 °C. Within the family of PAH molecules, COSMO-RS is sufficiently accurate to make it useful as a source of estimates for modeling purposes, following corrections for systematic underestimation of KOA. Average changes in the values of log H and log KOA upon substitution are given for various PAH substituent categories, with the most significant shifts being associated with the ionizing nitro functionality and keto groups.
de Klerk, Susan; Buchanan, Helen; Jerosch-Herold, Christina
Systematic review. The Disabilities of the Arm, Shoulder and Hand (DASH) Questionnaire has multiple language versions from many countries around the world, and there is extensive research evidence of its psychometric properties. The purpose of this study was to systematically review the evidence available on the validity and clinical utility of the DASH as a measure of activity and participation in patients with musculoskeletal hand injuries in developing-country contexts. We registered the review with the International Prospective Register of Systematic Reviews prior to conducting a comprehensive literature search and extracting descriptive data. Two reviewers independently assessed methodological quality with the Consensus-Based Standards for the Selection of Health Measurement Instruments critical appraisal tool, the checklist to operationalize measurement characteristics of patient-rated outcome measures, and the multidimensional model of clinical utility. Fourteen studies reporting 12 language versions met the eligibility criteria. Two language versions (Persian and Turkish) had an overall rating of good, and one (Thai) had an overall rating of excellent for cross-cultural validity. The remaining nine language versions had an overall poor rating for cross-cultural validity. Content and construct validity and clinical utility yielded similar results. Poor quality ratings for validity and clinical utility were due to insufficient documentation of results and inadequate psychometric testing. With the increase in migration and globalization, hand therapists are likely to require a range of culturally adapted and translated versions of the DASH. Recommendations include rigorous application and reporting of cross-cultural adaptation, appropriate psychometric testing, and testing of clinical utility in routine clinical practice. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Keil, Lukas G; Platts-Mills, Timothy F; Jones, Christopher W
2015-10-01
Publication bias compromises the validity of systematic reviews. This problem can be addressed in part through searching clinical trials registries to identify unpublished studies. This study aims to determine how often systematic reviews published in emergency medicine journals include clinical trials registry searches. We identified all systematic reviews published in the 6 highest-impact emergency medicine journals between January 1 and December 31, 2013. Systematic reviews that assessed the effects of an intervention were further examined to determine whether the authors described searching a clinical trials registry and whether this search identified relevant unpublished studies. Of 191 articles identified through a PubMed search, 80 were confirmed to be systematic reviews. Our sample consisted of 41 systematic reviews that assessed a specific intervention. Eight of these 41 (20%) searched a clinical trials registry. For 4 of these 8 reviews, the registry search identified at least 1 relevant unpublished study. Systematic reviews published in emergency medicine journals do not routinely include searches of clinical trials registries. By helping authors identify unpublished trial data, the addition of registry searches may improve the validity of systematic reviews. Copyright © 2014 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Losada-Espinosa, Natyieli; Villarroel, Morris; María, Gustavo A; Miranda-de la Lama, Genaro C
2018-04-01
Animal welfare has become an important subject of public, economic and political concern, leading to the need to validate indicators that are feasible to use at abattoirs. A systematic review was carried out, which identified 72 cattle welfare indicators (CWI) that were classified into four categories (physiological, morphometric, behavioral and meat quality). Their validity and feasibility for use in abattoirs were evaluated as potential measures of cattle welfare during transportation to the abattoir and at the abattoir itself. Several highly valid indicators useful for assessing welfare at abattoirs were identified, including body condition score, human-animal interactions, vocalizations, falling, carcass bruising, and meat pH. In addition, some indicators of intermediate validity are useful and should be investigated further. Information along the food chain could be used systematically to provide a basis for more risk-based meat inspection. An integrated system could be implemented based on the use of key indicators defined for each inspection step, with alarm thresholds set for each. Copyright © 2017 Elsevier Ltd. All rights reserved.
Publication bias and the failure of replication in experimental psychology.
Francis, Gregory
2012-12-01
Replication of empirical findings plays a fundamental role in science. Among experimental psychologists, successful replication enhances belief in a finding, while a failure to replicate is often interpreted to mean that one of the experiments is flawed. This view is wrong. Because experimental psychology uses statistics, empirical findings should appear with predictable probabilities. In a misguided effort to demonstrate successful replication of empirical findings and avoid failures to replicate, experimental psychologists sometimes report too many positive results. Rather than strengthen confidence in an effect, too much successful replication actually indicates publication bias, which invalidates entire sets of experimental findings. Researchers cannot judge the validity of a set of biased experiments because the experiment set may consist entirely of type I errors. This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication. Simulated experiments demonstrate that the publication bias test is able to discriminate biased experiment sets from unbiased experiment sets, but it is conservative about reporting bias. The test is then applied to several studies of prominent phenomena that highlight how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias. Such methods should be part of a systematic process to remove publication bias from experimental psychology and reinstate the important role of replication as a final arbiter of scientific findings.
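The test described, asking whether a set of n-out-of-n significant results is itself improbably successful, can be sketched under a normal approximation: estimate each experiment's power from its effect size and sample size, multiply the powers, and flag the set when the joint probability of all successes is small. The power formula and the flagging threshold of 0.1 below are common simplifications assumed for illustration, not values taken from this article.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Normal-approximation power of a two-sample comparison with
    standardized effect size d and n_per_group subjects per group."""
    return normal_cdf(d * math.sqrt(n_per_group / 2.0) - z_crit)

def excess_success(studies, threshold=0.1):
    """Joint probability that *all* studies succeed given their powers.

    studies: list of (effect_size_d, n_per_group) for experiments that all
    reported significant results.  A small joint probability suggests
    publication bias (too much successful replication).
    """
    p_all = 1.0
    for d, n in studies:
        p_all *= approx_power(d, n)
    return p_all, p_all < threshold

# Five significant experiments, each with d = 0.5 and n = 30 per group:
# each has power near 0.5, so five successes in a row is suspicious
p_all, biased = excess_success([(0.5, 30)] * 5)
print(round(p_all, 3), biased)  # joint success probability well below 0.1
```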
Kong, Qiusheng; Yuan, Jingxian; Gao, Lingyun; Zhao, Shuang; Jiang, Wei; Huang, Yuan; Bie, Zhilong
2014-01-01
Watermelon is one of the major Cucurbitaceae crops, and the recent availability of its genome sequence greatly facilitates fundamental research on it. Quantitative real-time reverse transcriptase PCR (qRT-PCR) is the preferred method for gene expression analyses, and using validated reference genes for normalization is crucial to ensure the accuracy of this method. However, a systematic validation of reference genes has not been conducted on watermelon. In this study, transcripts of 15 candidate reference genes were quantified in watermelon using qRT-PCR, and the stability of these genes was compared using geNorm and NormFinder. geNorm identified ClTUA and ClACT, ClEF1α and ClACT, and ClCAC and ClTUA as the best pairs of reference genes in watermelon organs and tissues under normal growth conditions, abiotic stress, and biotic stress, respectively. NormFinder identified ClYLS8, ClUBCP, and ClCAC as the best single reference genes under the above experimental conditions, respectively. ClYLS8 and ClPP2A were identified as the best reference genes across all samples. Two to nine reference genes were required for more reliable normalization, depending on the experimental conditions. The widely used watermelon reference gene 18SrRNA was less stable than the other reference genes under the experimental conditions. Catalase family genes were identified in the watermelon genome and used to validate the reliability of the identified reference genes. ClCAT1 and ClCAT2 were induced and upregulated in the first 24 h, whereas ClCAT3 was downregulated, in the leaves under low temperature stress. However, the expression levels of these genes were significantly overestimated and misinterpreted when 18SrRNA was used as a reference gene. These results provide a good starting point for reference gene selection in qRT-PCR analyses involving watermelon. PMID:24587403
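The geNorm ranking used in this study rests on a simple pairwise-variation idea: for each candidate gene, compute the standard deviation of its log2 expression ratio against every other candidate across the same samples, then average those; the gene with the lowest mean value M is the most stable. The sketch below illustrates that calculation with invented expression values, not the watermelon data.

```python
from math import log2
from statistics import stdev

def genorm_m(expr):
    """geNorm stability measure: for each gene, the mean standard deviation
    of its log2 expression ratio against every other candidate gene across
    the same samples. Lower M means a more stable reference gene."""
    genes = list(expr)
    m = {}
    for j in genes:
        sds = [stdev(log2(a / b) for a, b in zip(expr[j], expr[k]))
               for k in genes if k != j]
        m[j] = sum(sds) / len(sds)
    return m

# Invented relative-expression values over four samples: geneA and geneB
# co-vary (a stable pair), geneC fluctuates independently.
expr = {
    "geneA": [1.0, 2.0, 1.5, 1.2],
    "geneB": [1.1, 2.1, 1.6, 1.3],
    "geneC": [1.0, 0.3, 2.5, 0.8],
}
m = genorm_m(expr)
print(sorted(m, key=m.get))  # most to least stable; geneC ranks last
```

The key property, visible even in this toy example, is that M rewards genes whose expression rises and falls together across samples, which is why geNorm reports best *pairs* of reference genes.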
Generation, Analysis and Characterization of Anisotropic Engineered Meta Materials
NASA Astrophysics Data System (ADS)
Trifale, Ninad T.
A methodology for the systematic generation of highly anisotropic micro-lattice structures was investigated. Multiple algorithms for the generation and validation of engineered structures were developed and evaluated. The set of all possible permutations of structures for an 8-node cubic unit cell was considered, and the degree of anisotropy of meta-properties in heat transport and mechanical elasticity was evaluated. Feasibility checks were performed to ensure that each generated unit-cell network was repeatable and formed a continuous lattice structure. Four different strategies for generating permutations of the structures are discussed. Analytical models were developed to predict the effective thermal, mechanical, and permeability characteristics of these cellular structures. Experimentation and numerical modeling techniques were used to validate the models developed. A self-consistent mechanical elasticity model was developed that connects the meso-scale properties to the stiffness of individual struts. A three-dimensional thermal resistance network analogy was used to evaluate the effective thermal conductivity of the structures: the struts were modeled as a network of one-dimensional thermal resistive elements and the effective conductivity evaluated. Models were validated against numerical simulations and experimental measurements on 3D-printed samples. A model based on Darcy's law was developed to predict the effective permeability of these engineered structures. Drag coefficients were evaluated for individual connections in the transverse and longitudinal directions, and an interaction term was calibrated from experimental data in the literature in order to predict permeability. A generic optimization framework coupled to a finite-element solver was developed for analyzing any application involving the use of porous structures. Objective functions were generated to address the frequently observed trade-offs between stiffness, thermal conductivity, permeability, and porosity.
Three applications were analyzed for potential use of engineered materials: a heat spreader application involving thermal and mechanical constraints, an artificial bone graft application involving mechanical and permeability constraints, and structural materials applications involving mechanical, thermal, and porosity constraints. Recommendations for optimum topologies for specific operating conditions are provided.
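The thermal-resistance-network analogy described above can be illustrated with a toy calculation: each strut bridging the hot and cold faces acts as a one-dimensional conduction resistor R = L/(kA), parallel struts combine by summing conductances, and the effective conductivity follows from the cell-level resistance. The geometry and material values below are invented for illustration and are not taken from the dissertation.

```python
import math

def strut_resistance(length_m: float, k_w_mk: float, area_m2: float) -> float:
    """1D conduction resistance of a single strut, R = L / (k * A), in K/W."""
    return length_m / (k_w_mk * area_m2)

def parallel(resistances):
    """Combine parallel thermal resistors by summing conductances."""
    return 1.0 / sum(1.0 / r for r in resistances)

def effective_conductivity(cell_edge: float, strut_radius: float,
                           n_struts: int, k_solid: float) -> float:
    """Effective conductivity of a cubic cell whose heat path is n identical
    cylindrical struts in parallel between the hot and cold faces."""
    area = math.pi * strut_radius ** 2
    r_cell = parallel([strut_resistance(cell_edge, k_solid, area)] * n_struts)
    # Invert R = L / (k_eff * A_cell), with A_cell the full cell face.
    return cell_edge / (r_cell * cell_edge ** 2)

# Hypothetical cell: 1 mm edge, four 50-um-radius struts of a 200 W/(m K) solid.
k_eff = effective_conductivity(1e-3, 5e-5, 4, 200.0)
print(f"k_eff = {k_eff:.3f} W/(m K)")  # equals solid fraction times k_solid here
```

For this aligned-strut case the network reduces to the parallel rule, k_eff = φ·k_solid with φ the strut volume fraction; the value of the full network approach in the dissertation is handling struts at arbitrary angles, where no such closed form applies.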
Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf
2012-08-31
Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA), in particular by physical activity questionnaires (PAQs), remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs, and to quantitatively compare the performance of existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs against an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods, a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing PAQs and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30 to 0.39 for existing PAQs and from 0.25 to 0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, their validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and of the error structure of the instrument. PMID:22938557
Ionization chamber correction factors for MR-linacs
NASA Astrophysics Data System (ADS)
Pojtinger, Stefan; Dohm, Oliver Steffen; Kapsch, Ralf-Peter; Thorwarth, Daniela
2018-06-01
Previously, readings of air-filled ionization chambers have been described as being influenced by magnetic fields. To use these chambers for dosimetry in magnetic resonance guided radiotherapy (MRgRT), this effect must be taken into account by introducing a correction factor k_B. The purpose of this study is to systematically investigate k_B for a typical reference setup for commercially available ionization chambers with different magnetic field strengths. The Monte Carlo simulation tool EGSnrc was used to simulate eight commercially available ionization chambers in magnetic fields whose magnetic flux density was in the range of 0-2.5 T. To validate the simulation, the influence of the magnetic field was experimentally determined for a PTW30013 Farmer-type chamber for magnetic flux densities between 0 and 1.425 T. Changes in the detector response of up to 8% depending on the magnetic flux density, on the chamber geometry and on the chamber orientation were obtained. In the experimental setup, a maximum deviation of less than 2% was observed when comparing measured values with simulated values. Dedicated values for two MR-linac systems (ViewRay MRIdian, ViewRay Inc, Cleveland, United States, 0.35 T/6 MV and Elekta Unity, Elekta AB, Stockholm, Sweden, 1.5 T/7 MV) were determined for future use in reference dosimetry. Simulated values for thimble-type chambers are in good agreement with experiments as well as with the results of previous publications. After further experimental validation, the results can be considered for definition of standard protocols for purposes of reference dosimetry in MRgRT.
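To make the role of k_B concrete: in the standard reference-dosimetry chain, the chamber reading M is multiplied by the calibration coefficient N_D,w and a beam-quality correction k_Q, and the magnetic-field correction enters as one additional multiplicative factor. The sketch below shows only this bookkeeping; every numeric value is invented for illustration and is not from the study.

```python
def dose_to_water(m_reading: float, n_dw: float, k_q: float, k_b: float) -> float:
    """Absorbed dose to water from a chamber reading M, calibration
    coefficient N_D,w, beam-quality correction k_Q, and the magnetic-field
    correction k_B discussed above."""
    return m_reading * n_dw * k_q * k_b

# Hypothetical numbers: 20 nC reading, N_D,w = 0.054 Gy/nC, k_Q = 0.992.
d_no_field = dose_to_water(20.0, 0.054, 0.992, k_b=1.0)
d_with_kb = dose_to_water(20.0, 0.054, 0.992, k_b=1.012)  # ~1% correction
print(f"relative change from k_B: {(d_with_kb / d_no_field - 1) * 100:.2f}%")
```

Because k_B is multiplicative, even a ~1% correction maps directly into a ~1% shift in the reported dose, which is why the chamber- and orientation-dependent response changes of up to 8% reported above matter for reference dosimetry.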
The free jet as a simulator of forward velocity effects on jet noise
NASA Technical Reports Server (NTRS)
Ahuja, K. K.; Tester, B. J.; Tanna, H. K.
1978-01-01
A thorough theoretical and experimental study of the effects of the free-jet shear layer on the transmission of sound from a model jet placed within the free jet to the far-field receiver located outside the free-jet flow was conducted. The validity and accuracy of the free-jet flight simulation technique for forward velocity effects on jet noise was evaluated. Transformation charts and a systematic computational procedure for converting measurements from a free-jet simulation to the corresponding results from a wind-tunnel simulation, and, finally, to the flight case were provided. The effects of simulated forward flight on jet mixing noise, internal noise and shock-associated noise from model-scale unheated and heated jets were established experimentally in a free-jet facility. It was illustrated that the existing anomalies between full-scale flight data and model-scale flight simulation data projected to the flight case, could well be due to the contamination of flight data by engine internal noise.
Half-lives of α-decaying nuclei in the medium-mass region within the transfer matrix method
NASA Astrophysics Data System (ADS)
Wu, Shuangxiang; Qian, Yibin; Ren, Zhongzhou
2018-05-01
The α-decay half-lives of even-even nuclei from Sm to Th are systematically studied based on the transfer matrix method. For the nuclear potential, a cosh-parametrized form is applied to calculate the penetration probability. Through a least-squares fit to experimental half-lives, we optimize the parameters in the potential and the α preformation factor P0. During this process, P0 is treated as a constant for each parent nucleus. Eventually, the calculated half-lives are found to agree well with the experimental data, which verifies the accuracy of the present approach. Furthermore, in recent studies, P0 is regulated by the shell and pairing effects plus the nuclear deformation. To this end, P0 is here associated with a structural quantity, namely the microscopic correction of the nuclear mass (Emic). In this way, the agreement between theory and experiment is greatly improved, by more than 20%, validating the appropriate treatment of P0 in the Emic scheme.
Ogurtsova, Katherine; Heise, Thomas L; Linnenkamp, Ute; Dintsios, Charalabos-Markos; Lhachimi, Stefan K; Icks, Andrea
2017-12-29
Type 2 diabetes mellitus (T2DM), a highly prevalent chronic disease, puts a large burden on individual health and health care systems. Computer simulation models, used to evaluate the clinical and economic effectiveness of various interventions to handle T2DM, have become a well-established tool in diabetes research. Despite the broad consensus about the general importance of validation, especially external validation, as a crucial instrument of assessing and controlling for the quality of these models, there are no systematic reviews comparing such validation of diabetes models. As a result, the main objectives of this systematic review are to identify and appraise the different approaches used for the external validation of existing models covering the development and progression of T2DM. We will perform adapted searches by applying respective search strategies to identify suitable studies from 14 electronic databases. Retrieved study records will be included or excluded based on predefined eligibility criteria as defined in this protocol. Among others, a publication filter will exclude studies published before 1995. We will run abstract and full text screenings and then extract data from all selected studies by filling in a predefined data extraction spreadsheet. We will undertake a descriptive, narrative synthesis of findings to address the study objectives. We will pay special attention to aspects of quality of these models in regard to the external validation based upon ISPOR and ADA recommendations as well as Mount Hood Challenge reports. All critical stages within the screening, data extraction and synthesis processes will be conducted by at least two authors. This protocol adheres to PRISMA and PRISMA-P standards. The proposed systematic review will provide a broad overview of the current practice in the external validation of models with respect to T2DM incidence and progression in humans built on simulation techniques. PROSPERO CRD42017069983 .
Julien, Maxime; Gilbert, Alexis; Yamada, Keita; Robins, Richard J; Höhener, Patrick; Yoshida, Naohiro; Remaud, Gérald S
2018-01-01
The enrichment factor (ε) is a common way to express isotope effects (IEs) associated with a phenomenon. Many studies determine ε using a Rayleigh plot, which requires multiple data points. More recent articles describe an alternative method, based on the Rayleigh equation, that allows the determination of ε from only one experimental point, but this method is often subject to controversy. However, a calculation method using two points (one experimental point and one at t0) should lead to the same results, because the calculation is derived from the Rayleigh equation. It is nevertheless frequently asked: what is the valid domain of use of this two-point calculation? The primary aim of the present work is a systematic comparison of the results obtained with these two methodologies and the determination of the conditions required for the valid calculation of ε. In order to evaluate the efficiency of the two approaches, the expanded uncertainty (U) associated with determining ε has been calculated using experimental data from three published articles. The second objective of the present work is to describe how to determine the expanded uncertainty (U) associated with determining ε. Comparative methodologies using both the Rayleigh plot and the two-point calculation are detailed, and it is clearly demonstrated that calculation of ε from a single data point can give the same result as a Rayleigh plot provided one strict condition is respected: the experimental value must be measured at a small fraction of unreacted substrate (f < 30%). This study will help stable isotope users to present their results in the more rigorous form ε ± U, and therefore to better assess the significance of an experimental result prior to interpretation. Capsule: The enrichment factor can be determined by two different methods, and calculating the associated expanded uncertainty allows its significance to be checked. Copyright © 2017 Elsevier B.V. All rights reserved.
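The comparison at the heart of this abstract can be reproduced with synthetic data: generate substrate δ values from an exact Rayleigh fractionation, then recover ε both from an ordinary-least-squares slope on the Rayleigh plot and from a single (f, δ) point plus the t0 value. With noise-free data the two agree exactly for any f; the f < 30% condition cited above matters once measurement uncertainty is propagated. The values below (ε = -25‰, δ0 = +10‰) are illustrative, not from the cited articles.

```python
from math import log

EPS_TRUE = -25.0   # enrichment factor in permil (illustrative)
DELTA0 = 10.0      # initial delta of the substrate in permil (illustrative)

def delta_at(f: float) -> float:
    """Exact Rayleigh evolution of the residual substrate's delta value."""
    return (DELTA0 + 1000.0) * f ** (EPS_TRUE / 1000.0) - 1000.0

def eps_two_point(f: float) -> float:
    """Two-point calculation: one experimental (f, delta) pair plus t0."""
    return 1000.0 * log((delta_at(f) + 1000.0) / (DELTA0 + 1000.0)) / log(f)

def eps_rayleigh_plot(fs):
    """Rayleigh-plot estimate: OLS slope of ln((delta+1000)/(delta0+1000))
    against ln(f), scaled to permil."""
    xs = [log(f) for f in fs]
    ys = [log((delta_at(f) + 1000.0) / (DELTA0 + 1000.0)) for f in fs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1000.0 * slope

print(eps_two_point(0.2))                            # single point, f < 30%
print(eps_rayleigh_plot([0.9, 0.7, 0.5, 0.3, 0.2]))  # multi-point regression
```

Adding simulated measurement noise to delta_at and repeating the comparison at different f values is a simple way to see why the single-point estimate degrades at high remaining fractions, where ln(f) in the denominator is close to zero.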
Global Precipitation Measurement (GPM) Ground Validation: Plans and Preparations
NASA Technical Reports Server (NTRS)
Schwaller, M.; Bidwell, S.; Durning, F. J.; Smith, E.
2004-01-01
The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept, the planning, and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays an important role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper outlines GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error, as well as plans for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. This paper discusses NASA locations for GV measurements as well as anticipated locations from international GPM partners. NASA's primary locations for validation measurements are an oceanic site at Kwajalein Atoll in the Republic of the Marshall Islands and a continental site in north-central Oklahoma at the U.S. Department of Energy's Atmospheric Radiation Measurement Program site.
van der Meer, Suzan; Trippolini, Maurizio A; van der Palen, Job; Verhoeven, Jan; Reneman, Michiel F
2013-12-01
Systematic review. To evaluate the validity of instruments that claim to detect submaximal capacity when maximal capacity is requested in patients with chronic nonspecific musculoskeletal pain. Several instruments have been developed to measure capacity in patients with chronic pain. The detection of submaximal capacity can have major implications for patients. The validity of these instruments has never been systematically reviewed. A systematic literature search was performed including the following databases: Web of Knowledge (including PubMed and Cinahl), Scopus, and Cochrane. Two reviewers independently selected the articles based on the title and abstract according to the study selection criteria. Studies were included when they contained original data and when they objectified submaximal physical or functional capacity when maximal physical or functional capacity was requested. Two authors independently extracted data and rated the quality of the articles. The included studies were scored according to the subscales "Criterion Validity" and "Hypothesis Testing" of the COSMIN checklist. A Best Evidence Synthesis was performed. Seven studies were included, 5 of which used a reference standard for submaximal capacity. Three studies were of good methodological quality and validly detected submaximal capacity with specificity rates between 75% and 100%. There is strong evidence that submaximal capacity can be detected in patients with chronic low back pain with a lumbar motion monitor or visual observations accompanying a functional capacity evaluation lifting test.
Measurement properties of depression questionnaires in patients with diabetes: a systematic review.
van Dijk, Susan E M; Adriaanse, Marcel C; van der Zwaan, Lennart; Bosmans, Judith E; van Marwijk, Harm W J; van Tulder, Maurits W; Terwee, Caroline B
2018-06-01
To conduct a systematic review on measurement properties of questionnaires measuring depressive symptoms in adult patients with type 1 or type 2 diabetes. A systematic review of the literature in MEDLINE, EMbase and PsycINFO was performed. Full text, original articles, published in any language up to October 2016 were included. Eligibility for inclusion was independently assessed by three reviewers who worked in pairs. Methodological quality of the studies was evaluated by two independent reviewers using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Quality of the questionnaires was rated per measurement property, based on the number and quality of the included studies and the reported results. Of 6286 unique hits, 21 studies met our criteria evaluating nine different questionnaires in multiple settings and languages. The methodological quality of the included studies was variable for the different measurement properties: 9/15 studies scored 'good' or 'excellent' on internal consistency, 2/5 on reliability, 0/1 on content validity, 10/10 on structural validity, 8/11 on hypothesis testing, 1/5 on cross-cultural validity, and 4/9 on criterion validity. For the CES-D, there was strong evidence for good internal consistency, structural validity, and construct validity; moderate evidence for good criterion validity; and limited evidence for good cross-cultural validity. The PHQ-9 and WHO-5 also performed well on several measurement properties. However, the evidence for structural validity of the PHQ-9 was inconclusive. The WHO-5 was less extensively researched and originally not developed to measure depression. Currently, the CES-D is best supported for measuring depressive symptoms in diabetes patients.
Gilheaney, Ó; Kerr, P; Béchet, S; Walshe, M
2016-12-01
To determine the effectiveness of endoscopic cricopharyngeal myotomy on upper oesophageal sphincter dysfunction in adults with upper oesophageal sphincter dysfunction and neurological disease. Published and unpublished studies with a quasi-experimental design investigating endoscopic cricopharyngeal myotomy effects on upper oesophageal sphincter dysfunction in humans were considered eligible. Electronic databases, grey literature and reference lists of included studies were systematically searched. Data were extracted by two independent reviewers. Methodological quality was assessed independently using the PEDro scale and MINORS tool. Of 2938 records identified, 2 studies were eligible. Risk of bias assessment indicated areas of methodological concern in the literature. Statistical analysis was not possible because of the limited number of eligible studies. No determinations could be made regarding endoscopic cricopharyngeal myotomy effectiveness in the cohort of interest. Reliable and valid evidence on the following is required to support increasing clinical usage of endoscopic cricopharyngeal myotomy: optimal candidacy selection; standardised post-operative management protocol; complications; and endoscopic cricopharyngeal myotomy effects on aspiration of food and laryngeal penetration, mean upper oesophageal sphincter resting pressure and quality of life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarazona, David; Berz, Martin; Hipple, Robert
The main goal of the Muon g-2 Experiment (g-2) at Fermilab is to measure the muon anomalous magnetic moment to unprecedented precision. This new measurement will make it possible to test the completeness of the Standard Model (SM) and to validate other theoretical models beyond the SM. The close interplay between the understanding of particle beam dynamics, the preparation of the beam properties, and the experimental measurement is central to the reduction of systematic errors in the determination of the muon anomalous magnetic moment. We describe progress in developing detailed calculations and modeling of the muon beam delivery system in order to obtain a better understanding of spin-orbit correlations, nonlinearities, and more realistic aspects that contribute to the systematic errors of the g-2 measurement. Our simulation is meant to provide, among other things, statistical studies of error effects and quick analyses of running conditions for when g-2 is taking beam. We are using COSY, a differential algebra solver developed at Michigan State University, which will also serve as an alternative for comparing results obtained by other simulation teams of the g-2 Collaboration.
Measuring Resource Utilization: A Systematic Review of Validated Self-Reported Questionnaires.
Leggett, Laura E; Khadaroo, Rachel G; Holroyd-Leduc, Jayna; Lorenzetti, Diane L; Hanson, Heather; Wagg, Adrian; Padwal, Raj; Clement, Fiona
2016-03-01
A variety of methods may be used to obtain costing data. Although administrative data are most commonly used, the data available in these datasets are often limited. An alternative method of obtaining costing data is through self-reported questionnaires. Currently, there are no systematic reviews that summarize self-reported resource utilization instruments from the published literature. The aim of the study was to identify validated self-report healthcare resource use instruments and to map their attributes. A systematic review was conducted. The search identified articles using terms like "healthcare utilization" and "questionnaire." All abstracts and full texts were considered in duplicate. For inclusion, studies had to assess the validity of a self-reported resource use questionnaire, report original data, include adult populations, and the questionnaire had to be publicly available. Data such as the type of resource utilization assessed by each questionnaire and validation findings were extracted from each study. In all, 2343 unique citations were retrieved; 2297 were excluded during abstract review. Forty-six studies were reviewed in full text, and 15 studies were included in this systematic review. Six assessed resource utilization of patients with chronic conditions; 5 assessed mental health service utilization; 3 assessed resource utilization by a general population; and 1 assessed utilization in older populations. The most frequently measured resources included visits to general practitioners and inpatient stays; nonmedical resources were least frequently measured. Self-reported questionnaires on resource utilization had good agreement with administrative data, although visits to general practitioners, outpatient days, and nurse visits had poorer agreement. Self-reported questionnaires are a valid method of collecting data on healthcare resource utilization.
Effects of space environment on composites: An analytical study of critical experimental parameters
NASA Technical Reports Server (NTRS)
Gupta, A.; Carroll, W. F.; Moacanin, J.
1979-01-01
A generalized methodology currently employed at JPL was used to develop an analytical model for the effects of high-energy electrons and for interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters; the need to determine short-lived transient species and their concentrations was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show applicable qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify these relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration level; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.
NASA Technical Reports Server (NTRS)
Blad, B. L.; Norman, J. M.; Gardner, B. R.
1983-01-01
The experimental design, data acquisition, and analysis procedures for agronomic and reflectance data acquired over corn and soybeans at the Sandhills Agricultural Laboratory of the University of Nebraska are described. The following conclusions were reached: (1) predictive leaf area estimation models can be defined which appear valid over a wide range of soils; (2) relative grain yield estimates over moisture-stressed corn were improved by combining reflectance and thermal data; (3) corn phenology estimates using the model of Badhwar and Henderson (1981) exhibited systematic bias but were reasonably accurate; (4) canopy reflectance can be modelled to within approximately 10% of measured values; and (5) soybean pubescence significantly affects canopy reflectance, energy balance, and water use relationships.
Impedance of curved rectangular spiral coils around a conductive cylinder
NASA Astrophysics Data System (ADS)
Burke, S. K.; Ditchburn, R. J.; Theodoulidis, T. P.
2008-07-01
Eddy-current induction due to a thin conformable coil wrapped around a long conductive cylinder is examined using a second-order vector potential formalism. Compact closed-form expressions are derived for the self- and mutual impedances of curved rectangular spiral coils (i) in free space and (ii) when wrapped around the surface of the cylindrical rod. The validity of these expressions was tested against the results of a systematic series of experiments using a cylindrical Al-alloy rod and conformable coils manufactured using flexible printed-circuit-board technology. The theoretical expressions were in very good agreement with the experimental measurements. The significance of the results for eddy-current nondestructive inspection using flexible coils and flexible coil arrays is discussed.
Instruments Measuring Integrated Care: A Systematic Review of Measurement Properties.
Bautista, Mary Ann C; Nurjono, Milawaty; Lim, Yee Wei; Dessers, Ezra; Vrijhoef, Hubertus Jm
2016-12-01
Policy Points: Investigations on systematic methodologies for measuring integrated care should coincide with the growing interest in this field of research. A systematic review of instruments provides insights into integrated care measurement, including setting the research agenda for validating available instruments and informing the decision to develop new ones. This study is the first systematic review of instruments measuring integrated care with an evidence synthesis of the measurement properties. We found 209 index instruments measuring different constructs related to integrated care; the strength of evidence on the adequacy of the majority of their measurement properties remained largely unassessed. Integrated care is an important strategy for increasing health system performance. Despite its growing significance, detailed evidence on the measurement properties of integrated care instruments remains vague and limited. Our systematic review aims to provide evidence on the state of the art in measuring integrated care. Our comprehensive systematic review framework builds on the Rainbow Model for Integrated Care (RMIC). We searched MEDLINE/PubMed for published articles on the measurement properties of instruments measuring integrated care and identified eligible articles using a standard set of selection criteria. We assessed the methodological quality of every validation study reported using the COSMIN checklist and extracted data on study and instrument characteristics. We also evaluated the measurement properties of each examined instrument per validation study and provided a best evidence synthesis on the adequacy of measurement properties of the index instruments. From the 300 eligible articles, we assessed the methodological quality of 379 validation studies from which we identified 209 index instruments measuring integrated care constructs. 
The majority of studies reported on instruments measuring constructs related to care integration (33%) and patient-centered care (49%); fewer studies measured care continuity/comprehensive care (15%) and care coordination/case management (3%). We mapped 84% of the measured constructs to the clinical integration domain of the RMIC, with fewer constructs related to the domains of professional (3.7%), organizational (3.4%), and functional (0.5%) integration. Only 8% of the instruments were mapped to a combination of domains; none were mapped exclusively to the system or normative integration domains. The majority of instruments were administered to either patients (60%) or health care providers (20%). Of the measurement properties, responsiveness (4%), measurement error (7%), and criterion (12%) and cross-cultural validity (14%) were less commonly reported. We found <50% of the validation studies to be of good or excellent quality for any of the measurement properties. Only a minority of index instruments showed strong evidence of positive findings for internal consistency (15%), content validity (19%), and structural validity (7%), with moderate evidence of positive findings for internal consistency (14%) and construct validity (14%). Our results suggest that the quality of measurement properties of instruments measuring integrated care is in need of improvement, with the less-studied constructs and domains to become part of newly developed instruments. © 2016 Milbank Memorial Fund.
Towards reproducible experimental studies for non-convex polyhedral shaped particles
NASA Astrophysics Data System (ADS)
Wilke, Daniel N.; Pizette, Patrick; Govender, Nicolin; Abriak, Nor-Edine
2017-06-01
The packing density and flat-bottomed hopper discharge of non-convex polyhedral particles are investigated in a systematic experimental study. The motivation for this study is two-fold: firstly, to establish an approach to deliver quality experimental particle packing data for non-convex polyhedral particles that can be used for characterization and validation of discrete element codes; secondly, to make the reproducibility of experimental setups as convenient and readily available as possible using affordable and accessible technology. The primary technology for this study is fused deposition modeling, used to 3D print polylactic acid (PLA) particles with readily available 3D printer technology. A total of 8000 biodegradable particles were printed: 1000 white particles and 1000 black particles for each of the four particle types considered in this study. Reproducibility is one benefit of using fused deposition modeling to print particles, but an extremely important additional benefit is that specific particle properties can be explicitly controlled. For example, in this study the volume fraction of each particle can be controlled, i.e. the effective particle density can be adjusted. The particle volume reduces drastically as the non-convexity is increased; however, all printed white particles in this study have the same mass to within 2% of each other.
McCambridge, Jim; Butor-Bhavsar, Kaanan; Witton, John; Elbourne, Diana
2011-01-01
The possible effects of research assessments on participant behaviour have attracted research interest, especially in studies with behavioural interventions and/or outcomes. Assessments may introduce bias in randomised controlled trials by altering receptivity to intervention in experimental groups and differentially impacting on the behaviour of control groups. In a Solomon 4-group design, participants are randomly allocated to one of four arms: (1) assessed experimental group; (2) unassessed experimental group; (3) assessed control group; or (4) unassessed control group. This design provides a test of the internal validity of effect sizes obtained in conventional two-group trials by controlling for the effects of baseline assessment, and assessing interactions between the intervention and baseline assessment. The aim of this systematic review is to evaluate evidence from Solomon 4-group studies with behavioural outcomes that baseline research assessments themselves can introduce bias into trials. Electronic databases were searched, supplemented by citation searching. Studies were eligible if they reported appropriately analysed results in peer-reviewed journals and used Solomon 4-group designs in non-laboratory settings with behavioural outcome measures and sample sizes of 20 per group or greater. Ten studies from a range of applied areas were included. There was inconsistent evidence of main effects of assessment, sparse evidence of interactions with behavioural interventions, and a lack of convincing data in relation to the research question for this review. There were too few high-quality completed studies to infer conclusively that biases stemming from baseline research assessments do or do not exist. There is, therefore, a need for new rigorous Solomon 4-group studies that are purposively designed to evaluate the potential for research assessments to cause bias in behaviour change trials.
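The estimands of a Solomon 4-group design can be sketched with a small simulation. All arm means, effect sizes, and sample sizes below are hypothetical illustrations, not values from any reviewed study:

```python
import random

random.seed(0)

def simulate_arm(n, base, intervention, assessment, interaction):
    # Outcome = base + additive effects + unit-variance noise (all hypothetical).
    return [base + intervention + assessment + interaction + random.gauss(0, 1.0)
            for _ in range(n)]

n = 50
# The four arms: (baseline-assessed?, experimental or control)
arms = {
    ("assessed",   "experimental"): simulate_arm(n, 10, 2.0, 0.5, 0.8),
    ("unassessed", "experimental"): simulate_arm(n, 10, 2.0, 0.0, 0.0),
    ("assessed",   "control"):      simulate_arm(n, 10, 0.0, 0.5, 0.0),
    ("unassessed", "control"):      simulate_arm(n, 10, 0.0, 0.0, 0.0),
}
m = {k: sum(v) / len(v) for k, v in arms.items()}

# Main effect of baseline assessment, averaged over intervention status.
assessment_effect = ((m[("assessed", "experimental")] + m[("assessed", "control")])
                     - (m[("unassessed", "experimental")] + m[("unassessed", "control")])) / 2

# Assessment x intervention interaction: does being assessed at baseline
# change the apparent treatment effect?
interaction = ((m[("assessed", "experimental")] - m[("assessed", "control")])
               - (m[("unassessed", "experimental")] - m[("unassessed", "control")]))

print(f"assessment main effect ~ {assessment_effect:.2f}, interaction ~ {interaction:.2f}")
```

A non-zero interaction estimate is the signature of assessment-induced bias that conventional two-group trials cannot detect, since they lack the unassessed arms.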
Shivakumar, H N; Desai, B G; Pandya, Saumyak; Karki, S S
2007-01-01
Glipizide was complexed with beta-cyclodextrin in an attempt to enhance the drug solubility. The phase solubility diagram was classified as A(L) type, characterized by an apparent 1:1 stability constant of 413.82 M(-1). Fourier transform infrared spectrophotometry, differential scanning calorimetry, powder x-ray diffractometry and proton nuclear magnetic resonance spectral analysis indicated considerable interaction between the drug and beta-cyclodextrin. A 2(3) factorial design was employed to prepare hydroxypropyl methylcellulose (HPMC) matrix tablets containing the drug or its complex. The effect of the total polymer load (X1), level of HPMC K100LV (X2), and complexation (X3) on release at the first hour (Y1), release at 24 h (Y2), time taken for 50% release (Y3), and diffusion exponent (Y4) was systematically analyzed using the F test. Mathematical models containing only the significant terms (P < 0.05) were generated for each parameter by multiple linear regression analysis and analysis of variance. Complexation was found to exert a significant effect on Y1, Y2, and Y3, whereas total polymer load significantly influenced all the responses. The models generated were validated by developing two new formulations with combinations of factors within the experimental domain. The experimental values of the response parameters were in close agreement with the predicted values, thereby proving the validity of the generated mathematical models.
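The A(L)-type classification and the apparent 1:1 stability constant follow the standard Higuchi-Connors phase-solubility treatment, K(1:1) = slope / (S0 * (1 - slope)), where the slope and intrinsic solubility S0 come from a linear fit of drug solubility against cyclodextrin concentration. A minimal sketch with hypothetical solubility data (the paper's raw data are not given here, so the resulting K differs from the reported 413.82 M(-1)):

```python
# Hypothetical phase-solubility data (mol/L): drug solubility vs beta-CD concentration.
cd   = [0.000, 0.002, 0.004, 0.006, 0.008, 0.010]
drug = [0.00030, 0.00041, 0.00052, 0.00063, 0.00074, 0.00085]

# Ordinary least-squares fit of drug = S0 + slope * cd.
n = len(cd)
mx, my = sum(cd) / n, sum(drug) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cd, drug))
         / sum((x - mx) ** 2 for x in cd))
s0 = my - slope * mx          # intercept = intrinsic solubility S0

# A linear diagram with slope < 1 is A(L) type, giving an apparent
# 1:1 stability constant via Higuchi-Connors.
k11 = slope / (s0 * (1 - slope))
print(f"slope = {slope:.3f}, S0 = {s0:.5f} M, K(1:1) = {k11:.1f} M^-1")
```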
Montedori, Alessandro; Abraha, Iosief; Chiatti, Carlos; Cozzolino, Francesco; Orso, Massimiliano; Luchetta, Maria Laura; Rimland, Joseph M; Ambrosio, Giuseppe
2016-09-15
Administrative healthcare databases are useful to investigate the epidemiology, health outcomes, quality indicators and healthcare utilisation concerning peptic ulcers and gastrointestinal bleeding, but the databases need to be validated in order to be a reliable source for research. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases, 9th Revision and 10th version (ICD-9 and ICD-10) codes for peptic ulcer and upper gastrointestinal bleeding diagnoses. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched, using appropriate search strategies. We will include validation studies that used administrative data to identify peptic ulcer disease and upper gastrointestinal bleeding diagnoses or studies that evaluated the validity of peptic ulcer and upper gastrointestinal bleeding codes in administrative data. The following inclusion criteria will be used: (a) the presence of a reference standard case definition for the diseases of interest; (b) the presence of at least one test measure (eg, sensitivity, etc) and (c) the use of an administrative database as a source of data. Pairs of reviewers will independently abstract data using standardised forms and will evaluate quality using the checklist of the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA-P) 2015 statement. Ethics approval is not required given that this is a protocol for a systematic review. We will submit results of this study to a peer-reviewed journal for publication. 
The results will serve as a guide for researchers validating administrative healthcare databases to determine appropriate case definitions for peptic ulcer disease and upper gastrointestinal bleeding, as well as to perform outcome research using administrative healthcare databases of these conditions. CRD42015029216. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro
2016-06-01
Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system, the Read codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204.
Cheng, Ting-Yin; Tarng, Der-Cherng; Liao, Yuan-Mei; Lin, Pi-Chu
2017-02-01
To investigate the effectiveness of systematic nursing instruction on a low-phosphorus diet, serum phosphorus level and pruritus of haemodialysis patients. A high number of end-stage renal disease patients on haemodialysis are bothered by pruritus. Hyperphosphataemia has been reported to be related to pruritus. An experimental design was applied. Ninety-four patients who received haemodialysis between September 2013 and December 2013 at a medical centre in Taipei, Taiwan, were recruited. An experimental group received individual systematic nursing instruction from the investigator through a nursing instruction pamphlet and a reminder card for taking medication. A control group received traditional nursing instruction. Pruritus, blood phosphorus level and five-day diet records were evaluated before and after the intervention. The experimental group had a lower-phosphorus diet intake than the control group (p < 0·001). A significant difference in serum phosphorus level was observed between the experimental and control groups (p = 0·002). Incidence of pruritus was lower in the experimental group than in the control group (p < 0·001). With systematic nursing instruction that included a pamphlet, pictures and reminder cards, the patients' blood phosphorus levels decreased, the patients consumed more low-phosphorus food, and pruritus decreased. This study recommends that clinical nursing staff include systematic nursing instruction as a routine practice for dialysis patients. © 2016 John Wiley & Sons Ltd.
Validity of the MCAT in Predicting Performance in the First Two Years of Medical School.
ERIC Educational Resources Information Center
Jones, Robert F.; Thomae-Forgues, Maria
1984-01-01
The first systematic summary of predictive validity research on the new Medical College Admission Test (MCAT) is presented. The results show that MCAT scores have significant predictive validity with respect to first- and second-year medical school course grades. Further directions for MCAT validity research are described. (Author/MLW)
We introduce and validate a new precision oncology framework for the systematic prioritization of drugs targeting mechanistic tumor dependencies in individual patients. Compounds are prioritized on the basis of their ability to invert the concerted activity of master regulator proteins that mechanistically regulate tumor cell state, as assessed from systematic drug perturbation assays. We validated the approach on a cohort of 212 gastroenteropancreatic neuroendocrine tumors (GEP-NETs), a rare malignancy originating in the pancreas and gastrointestinal tract.
RBANS Validity Indices: a Systematic Review and Meta-Analysis.
Shura, Robert D; Brearly, Timothy W; Rowland, Jared A; Martindale, Sarah L; Miskey, Holly M; Duff, Kevin
2018-05-16
Neuropsychology practice organizations have highlighted the need for thorough evaluation of performance validity as part of the neuropsychological assessment process. Embedded validity indices are derived from existing measures and expand the scope of validity assessment. The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a brief instrument that quickly allows a clinician to assess a variety of cognitive domains. The RBANS also contains multiple embedded validity indicators. The purpose of this study was to synthesize evidence on the utility of those indicators for assessing performance validity. A systematic search was completed, resulting in 11 studies for synthesis and 10 for meta-analysis. Data were synthesized on four indices and three subtests across samples of civilians, service members, and veterans. Sufficient data for meta-analysis were only available for the Effort Index, and related analyses indicated optimal cutoff scores of ≥1 (AUC = .86) and ≥3 (AUC = .85). However, outliers and heterogeneity were present, indicating the importance of age and evaluation context. Overall, embedded validity indicators have shown adequate diagnostic accuracy across a variety of populations. Recommendations for interpreting these measures and future studies are provided.
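The cutoff-score logic behind such validity indices can be sketched as follows; the score distributions below are entirely hypothetical and are not the RBANS Effort Index norms:

```python
# Hypothetical embedded-validity-index scores: higher = more suspect performance.
valid_group   = [0, 0, 0, 1, 0, 1, 2, 0, 1, 0, 0, 2, 1, 0, 0]  # credible performers
invalid_group = [1, 3, 4, 2, 5, 3, 1, 4, 2, 3]                  # non-credible performers

def sens_spec(cutoff):
    """Classify a score as 'invalid performance' when score >= cutoff."""
    sens = sum(s >= cutoff for s in invalid_group) / len(invalid_group)
    spec = sum(s < cutoff for s in valid_group) / len(valid_group)
    return sens, spec

# A lower cutoff trades specificity for sensitivity, and vice versa;
# choosing between e.g. >=1 and >=3 depends on the evaluation context.
for cut in (1, 3):
    sens, spec = sens_spec(cut)
    print(f"cutoff >= {cut}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```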
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
ERIC Educational Resources Information Center
Smith, Justin D.
2012-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…
ERIC Educational Resources Information Center
Kersten, Paula; Czuba, Karol; McPherson, Kathryn; Dudley, Margaret; Elder, Hinemoa; Tauroa, Robyn; Vandal, Alain
2016-01-01
This article synthesized evidence for the validity and reliability of the Strengths and Difficulties Questionnaire in children aged 3-5 years. A systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines was carried out. Study quality was rated using the Consensus-based Standards for the…
Tsang, B; Stothers, L; Macnab, A; Lazare, D; Nigro, M
2016-03-01
Validated questionnaires are increasingly the preferred method used to obtain historical information. Specialized questionnaires exist that are validated for patients with neurogenic disease, including neurogenic bladder. Those currently available are systematically reviewed and their potential for clinical and research use is described. A systematic search was performed via Medline and PubMed using the key terms questionnaire(s) crossed with Multiple Sclerosis (MS) and Spinal Cord Injury (SCI) for the years 1946 to January 22, 2014, inclusive. Additional articles were selected from review of references in the publications identified. Only peer-reviewed articles published in English were included. Eighteen questionnaires exist that are validated for patients with neurogenic bladder: 14 related to MS, 3 to SCI, and 1 to neurogenic bladder in general, with 4 cross-validated in both MS and SCI. All 18 are validated for both male and female patients; 59% are available only in English. The domains of psychological impact and physical function are represented in 71% and 76% of questionnaires, respectively. None for the female population included elements to measure symptoms of prolapse. The last decade has seen an expansion of validated questionnaires to document bladder symptoms in neurogenic disease. Disease-specific instruments are available for incorporation into the clinical setting for MS and SCI patients with neurogenic bladder. The availability of caregiver and interview options enhances suitability in clinical practice, as they can be adapted to various extents of disability. Future developments should include expanded language validation to the top 10 global languages reported by the World Health Organization. © 2015 Wiley Periodicals, Inc.
Kivlan, Benjamin R; Martin, Robroy L
2012-08-01
The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. The single-leg stance, deep squat, single-leg squat, and star excursion balance test (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement (FAI). Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish the reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. 2b (Systematic Review of Literature).
Neijenhuijs, Koen I; Jansen, Femke; Aaronson, Neil K; Brédart, Anne; Groenvold, Mogens; Holzner, Bernhard; Terwee, Caroline B; Cuijpers, Pim; Verdonck-de Leeuw, Irma M
2018-05-07
The EORTC IN-PATSAT32 is a patient-reported outcome measure (PROM) to assess cancer patients' satisfaction with in-patient health care. The aim of this study was to investigate whether the initial good measurement properties of the IN-PATSAT32 are confirmed in new studies. Within the scope of a larger systematic review study (Prospero ID 42017057237), a systematic search was performed of Embase, Medline, PsycINFO, and Web of Science for studies that investigated measurement properties of the IN-PATSAT32 up to July 2017. Study quality was assessed, and data were extracted and synthesized according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology. Nine studies were included in this review. The evidence on reliability and construct validity was rated as sufficient, and the quality of this evidence as moderate. The evidence on structural validity was rated as insufficient and of low quality. The evidence on internal consistency was indeterminate. Measurement error, responsiveness, criterion validity, and cross-cultural validity were not reported in the included studies. Measurement error could be calculated for two studies and was judged indeterminate. In summary, the IN-PATSAT32 performs as expected with respect to reliability and construct validity. No firm conclusions can yet be drawn on whether the IN-PATSAT32 also performs as well with respect to structural validity and internal consistency. Further research is therefore needed on these measurement properties of the PROM, as well as on measurement error, responsiveness, criterion validity, and cross-cultural validity. For future studies, it is recommended to take the COSMIN methodology into account.
Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon
2011-04-04
A new experimental design methodology was developed by integrating the response surface methodology and time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. This methodology could be very promising for obtaining maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained-release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.
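A simplex lattice design enumerates mixtures whose component proportions are multiples of 1/m and sum to 1. A minimal generator is sketched below; note that a {3, 5} lattice happens to contain C(3+5-1, 5) = 21 points, matching the run count reported, but the paper's actual factor arrangement is not specified here, so this is an illustration only:

```python
def simplex_lattice(q, m):
    """All q-component mixtures with proportions in {0, 1/m, ..., 1} summing to 1."""
    points = []

    def rec(prefix, remaining, slots):
        if slots == 1:
            # Last component absorbs whatever proportion remains.
            points.append(prefix + [remaining / m])
            return
        for i in range(remaining + 1):
            rec(prefix + [i / m], remaining - i, slots - 1)

    rec([], m, q)
    return points

design = simplex_lattice(3, 5)   # {3, 5} lattice
print(f"{len(design)} candidate blends, e.g. {design[0]} and {design[-1]}")
```

Each row of `design` is a candidate formulation blend; in practice the lattice is often augmented with centroid or axial points before fitting a mixture response-surface model.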
ERIC Educational Resources Information Center
Cook, David A.; Zendejas, Benjamin; Hamstra, Stanley J.; Hatala, Rose; Brydges, Ryan
2014-01-01
Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types…
A miniaturized, optically accessible bioreactor for systematic 3D tissue engineering research.
Laganà, Matteo; Raimondi, Manuela T
2012-02-01
Perfusion bioreactors are widely used in tissue engineering and pharmaceutical research to provide reliable models of tissue growth under controlled conditions. Destructive assays cannot follow the evolution of the growing tissue on the same construct, so it is necessary to adopt non-destructive analysis. We have developed a miniaturized, optically accessible bioreactor for interstitial perfusion of 3D cell-seeded scaffolds. The scaffold adopted was optically transparent, with a highly defined architecture. Computational fluid dynamics (CFD) analysis was used to predict the flow behavior in the bioreactor scaffold chamber (laminar flow, Re = 0.179, with a mean velocity of 100 microns/s). Experimental characterization of the bioreactor performance showed that the maximum allowable pressure was 0.06 MPa, with allowable flow rates up to 25 ml/min. A method to estimate quantitatively and non-destructively the cell proliferation (from 15 to 43 thousand cells) and tissue growth (from 2% to 43%) during culture time was introduced and validated. An end-point viability test was performed to check the overall suitability of the experimental set-up for cell culture, with successful results. Morphological analysis was performed at the end time point to show the complex three-dimensional pattern of biological tissue growth. Our system, characterized by controlled conditions within a wide range of allowable flow rates and pressures, permits systematic study of the influence of several parameters on engineered tissue growth, using viable staining and a standard fluorescence microscope.
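The Reynolds number quoted above, Re = rho * v * L / mu, is what establishes the laminar regime. A quick sketch with water-like fluid properties; the characteristic length is back-calculated here so that Re matches the reported 0.179, since the authors' actual chamber geometry is not given in this abstract:

```python
# Hypothetical water-like culture medium properties.
rho = 1000.0      # fluid density, kg/m^3
mu  = 8.9e-4      # dynamic viscosity, Pa*s
v   = 100e-6      # mean interstitial velocity, m/s (100 microns/s, as reported)

# Characteristic length chosen so Re matches the reported value (illustrative only).
L = 0.179 * mu / (rho * v)        # ~1.6 mm
re = rho * v * L / mu

# Re << ~2300 (the classical pipe-flow transition threshold), so the
# perfusion through the scaffold chamber is deep in the laminar regime.
print(f"L ~ {L * 1e3:.2f} mm, Re = {re:.3f}")
```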
Dube, Loveness; Van den Broucke, Stephan; Housiaux, Marie; Dhoore, William; Rendall-Mkosi, Kirstie
2015-02-01
Although self-management education is a key factor in the care for diabetes patients, its implementation in developing countries is not well documented. This systematic review considers the published literature on diabetes self-management education in high and low mortality developing countries. The aim is to provide a state-of-the-art overview of current practices and to assess program outcomes, cultural sensitivity, and accessibility for patients with low literacy. The Cochrane Library, PubMed, MEDLINE, PsycInfo, and PsycArticles databases were searched for peer-reviewed articles on type 2 diabetes published in English between 2009 and 2013. The World Bank and WHO burden of disease criteria were applied to distinguish between developing countries with high and low mortality. Information was extracted using a validated checklist. Three reviews and 23 primary studies were identified, 18 of which were from low mortality developing countries. Studies from high mortality countries were mostly quasi-experimental, whereas those from low mortality countries were experimental. Interventions were generally effective on behavior change and patients' glycemic control in the short term (≤9 months). While 57% of the studies mentioned cultural tailoring of interventions, only 17% reported on training of providers, and 39% were designed to be accessible to people with low literacy. The limited studies available suggest that diabetes self-management education programs in developing countries are effective in the short term but must be tailored to the cultural characteristics of the target population. © 2014 The Author(s).
Koohestani, Hamid Reza; Soltani Arabshahi, Seyed Kamran; Fata, Ladan; Ahmadi, Fazlollah
2018-04-01
The demand for mobile learning in medical science education is increasing. The present review gathers the evidence highlighted by experimental studies on the educational effects of mobile learning for medical science students. The study was carried out as a systematic search of the literature published from 2007 to July 2017 in the databases PubMed/Medline, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Knowledge (Thomson Reuters), Educational Resources and Information Center (ERIC), EMBASE (Elsevier), Cochrane Library, PsycINFO and Google Scholar. To examine the quality of the articles, a tool validated by the BEME Review was employed. In total, 21 papers were included in the study. Three main themes emerged from the content of the papers: (1) improvement in student clinical competency and confidence, (2) acquisition and enhancement of students' theoretical knowledge, and (3) students' positive attitudes to and perception of mobile learning. Level 2B of the Kirkpatrick hierarchy was examined in all the papers, and seven of them reported two or more outcome levels, but level 4 was not reported in any paper. Our review showed that medical science students responded positively to mobile learning. Moreover, implementation of mobile learning in medical science programs might lead to valuable educational benefits and improve clinical competence and confidence along with theoretical knowledge, attitudes, and perception of mobile learning. The results indicated that a mobile learning strategy in medical education can positively affect learning in all three domains of Bloom's Taxonomy.
Nascimento-Ferreira, Marcus V; Collese, Tatiana S; de Moraes, Augusto César F; Rendo-Urteaga, Tara; Moreno, Luis A; Carvalho, Heráclito B
2016-12-01
Sleep duration has been associated with several health outcomes in children and adolescents. As an extensive number of questionnaires are currently used to investigate sleep schedule or sleep time, we performed a systematic review of the criterion validation of sleep time questionnaires for children and adolescents, considering accelerometers as the reference method. We found a strong correlation between questionnaires and accelerometers for weeknights and a moderate correlation for weekend nights. When considering only studies performing a reliability assessment of the questionnaires used, a significant increase in the correlations for both weeknights and weekend nights was observed. In conclusion, sleep time questionnaires showed moderate to strong criterion validity, and questionnaires whose reliability had been assessed showed stronger validation performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
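Criterion validity in these studies is typically quantified as the correlation between questionnaire-reported and accelerometer-measured sleep time. A minimal sketch using made-up sleep durations (all numbers hypothetical, for illustration only):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical minutes of sleep per night: questionnaire vs. accelerometer
questionnaire = [540, 510, 480, 600, 450, 570]
accelerometer = [520, 500, 470, 580, 460, 555]
r = pearson_r(questionnaire, accelerometer)
```

A value of r near 1 would indicate strong criterion validity against the accelerometer reference, mirroring the weeknight findings summarized above.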
Brurberg, Kjetil Gundro; Fønhus, Marita Sporstøl; Larun, Lillebeth; Flottorp, Signe; Malterud, Kirsti
2014-01-01
Objective To identify case definitions for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME), and explore how the validity of case definitions can be evaluated in the absence of a reference standard. Design Systematic review. Setting International. Participants A literature search, updated as of November 2013, led to the identification of 20 case definitions and inclusion of 38 validation studies. Primary and secondary outcome measures Validation studies were assessed for risk of bias and categorised according to three validation models: (1) independent application of several case definitions to the same population, (2) sequential application of different case definitions to patients diagnosed with CFS/ME by one set of diagnostic criteria or (3) comparison of prevalence estimates from different case definitions applied to different populations. Results A total of 38 studies contributed data of sufficient quality and consistency for evaluation of validity, with CDC-1994/Fukuda as the most frequently applied case definition. No study rigorously assessed the reproducibility or feasibility of case definitions. Validation studies were small, with methodological weaknesses and inconsistent results. No empirical data indicated that any case definition specifically identified patients with a neuroimmunological condition. Conclusions Classification of patients according to severity and symptom patterns, aiming to predict prognosis or effectiveness of therapy, seems useful. Development of further case definitions of CFS/ME should be given a low priority. Consistency in research can be achieved by applying diagnostic criteria that have been subjected to systematic evaluation. PMID:24508851
Pichler, Martin; Stiegelbauer, Verena; Vychytilova-Faltejskova, Petra; Ivan, Cristina; Ling, Hui; Winter, Elke; Zhang, Xinna; Goblirsch, Matthew; Wulf-Goldenberg, Annika; Ohtsuka, Masahisa; Haybaeck, Johannes; Svoboda, Marek; Okugawa, Yoshinaga; Gerger, Armin; Hoefler, Gerald; Goel, Ajay; Slaby, Ondrej; Calin, George Adrian
2017-01-01
Purpose Characterization of the colorectal cancer transcriptome by high-throughput techniques has enabled the discovery of several differentially expressed genes involving previously unreported miRNA abnormalities. Here, we followed a systematic approach on a global scale to identify miRNAs as clinical outcome predictors and further validated them in the clinical and experimental setting. Experimental Design Genome-wide miRNA sequencing data of 228 colorectal cancer patients from The Cancer Genome Atlas dataset were analyzed as a screening cohort to identify miRNAs significantly associated with survival according to stringent prespecified criteria. A panel of six miRNAs was further validated for their prognostic utility in a large independent validation cohort (n = 332). In situ hybridization and functional experiments in a panel of colorectal cancer cell lines and xenografts further clarified the role of clinically relevant miRNAs. Results Six miRNAs (miR-92b-3p, miR-188-3p, miR-221-5p, miR-331-3p, miR-425-3p, and miR-497-5p) were identified as strong predictors of survival in the screening cohort. High miR-188-3p expression proved to be an independent prognostic factor [screening cohort: HR = 4.137; 95% confidence interval (CI), 1.568–10.917; P = 0.004; validation cohort: HR = 1.538; 95% CI, 1.107–2.137; P = 0.010, respectively]. Forced miR-188-3p expression increased the migratory behavior of colorectal cancer cells in vitro and metastases formation in vivo (P < 0.05). The promigratory role of miR-188-3p is mediated by direct interaction with MLLT4, a newly identified player involved in colorectal cancer cell migration. Conclusions miR-188-3p is a novel independent prognostic factor in colorectal cancer patients, which can be partly explained by its effect on MLLT4 expression and migration of cancer cells. PMID:27601590
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2018-01-01
Determination of slosh damping is a very challenging task, as there is no analytical solution. The damping physics involves vorticity dissipation, which requires the full solution of the nonlinear Navier-Stokes equations. As a result, previous investigations were mainly carried out by extensive experiments. A systematic study is needed to understand the damping physics of baffled tanks, to identify the difference between the empirical Miles equation and experimental measurements, and to develop new semi-empirical relations to better represent the real damping physics. The approach of this study is to use Computational Fluid Dynamics (CFD) technology to shed light on the damping mechanisms of a baffled tank. First, a 1-D Navier-Stokes equation representing the different length scales and time scales in the baffle damping physics is developed and analyzed. Loci-STREAM-VOF, a well-validated CFD solver developed at NASA MSFC, is applied to study the vorticity field around a baffle and around the fluid-gas interface to highlight the dissipation mechanisms at different slosh amplitudes. Previous measurement data are then used to validate the CFD damping results. The study found several critical parameters controlling fluid damping from a baffle: local slosh amplitude to baffle thickness (A/t), surface liquid depth to tank radius (d/R), local slosh amplitude to baffle width (A/W), and non-dimensional slosh frequency. The simulation highlights three significant damping regimes in which different mechanisms dominate. The study shows that the previously found discrepancies between the Miles equation and experimental measurements are due not to measurement scatter, but rather to different damping mechanisms at various slosh amplitudes. The limitations on the use of the Miles equation are discussed based on the flow regime.
Snodgrass, Melinda R; Chung, Moon Y; Meadan, Hedda; Halle, James W
2018-03-01
Single-case research (SCR) has been a valuable methodology in special education research. Montrose Wolf (1978), an early pioneer in single-case methodology, coined the term "social validity" to refer to the social importance of the goals selected, the acceptability of the procedures employed, and the effectiveness of the outcomes produced in applied investigations. Since 1978, many contributors to SCR have included social validity as a feature of their articles, and several authors have examined the prevalence and role of social validity in SCR. We systematically reviewed all SCR published in six highly ranked special education journals from 2005 to 2016 to establish the prevalence of social validity assessments and to evaluate their scientific rigor. We found relatively low but stable prevalence, with only 28 publications addressing all three factors of the social validity construct (i.e., goals, procedures, outcomes). We conducted an in-depth analysis of the scientific rigor of these 28 publications. Social validity remains an understudied construct in SCR, and the scientific rigor of social validity assessments is often lacking. Implications and future directions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel
2009-02-03
Traditionally, surfactant bulk solutions in which dynamic surface tension (DST) measurements are conducted using the pendant-bubble apparatus are assumed to be quiescent. Consequently, the transport of surfactant molecules in the bulk solution is often modeled as being purely diffusive when analyzing the experimental pendant-bubble DST data. In this Article, we analyze the experimental pendant-bubble DST data of the alkyl poly (ethylene oxide) nonionic surfactants, C12E4 and C12E6, and demonstrate that both surfactants exhibit "superdiffusive" adsorption kinetics behavior with characteristics that challenge the traditional assumption of a quiescent surfactant bulk solution. In other words, the observed superdiffusive adsorption behavior points to the possible existence of convection currents in the surfactant bulk solution. The analysis presented here involves the following steps: (1) constructing an adsorption kinetics model that corresponds to the fastest rate at which surfactant molecules adsorb onto the actual pendant-bubble surface from a quiescent solution, (2) predicting the DST behaviors of C12E4 and C12E6 at several surfactant bulk solution concentrations using the model constructed in step 1, and (3) comparing the predicted DST profiles with the experimental DST profiles. This comparison reveals systematic deviations for both C12E4 and C12E6 with the following characteristics: (a) the experimental DST profiles exhibit adsorption kinetics behavior, which is faster than the predicted fastest rate of surfactant adsorption from a quiescent surfactant bulk solution at time scales greater than 100 s, and (b) the experimental DST profiles and the predicted DST behaviors approach the same equilibrium surface tension values. Characteristic (b) indicates that the cause of the observed systematic deviations may be associated with the adsorption kinetics mechanism adopted in the model used rather than with the equilibrium behavior. 
Characteristic (a) indicates that the actual surfactant bulk solution in which the DST measurement was conducted, most likely, cannot be considered to be quiescent at time scales greater than 100 s. Accordingly, the observed superdiffusive adsorption behavior is interpreted as resulting from convection currents present in a nonquiescent surfactant bulk solution. Convection currents accelerate the surfactant adsorption process by increasing the rate of surfactant transport in the bulk solution. The systematic nature of the deviations observed between the predicted DST profiles and the experimental DST behavior for C12E4 and C12E6 suggests that the nonquiescent nature of the surfactant bulk solution may be intrinsic to the experimental pendant-bubble DST measurement approach. To validate this possibility, we identified generic features in the experimental DST data when DST measurements are conducted in a nonquiescent surfactant bulk solution, and the DST measurements are analyzed assuming that the surfactant bulk solution is quiescent. An examination of the DST literature reveals that these identified generic features are quite general and are observed in the experimental DST data of several other surfactants (decanol, nonanol, C10E8, C14E8, C12E8, and C10E4) measured using the pendant-bubble apparatus.
Systematic review of prediction models for delirium in the older adult inpatient.
Lindroth, Heidi; Bratzke, Lisa; Purvis, Suzanne; Brown, Roger; Coburn, Mark; Mrkobrada, Marko; Chan, Matthew T V; Davis, Daniel H J; Pandharipande, Pratik; Carlsson, Cynthia M; Sanders, Robert D
2018-04-28
To identify existing prognostic delirium prediction models and evaluate their validity and statistical methodology in the older adult (≥60 years) acute hospital population. Systematic review. PubMed, CINAHL, PsychINFO, SocINFO, Cochrane, Web of Science and Embase were searched from 1 January 1990 to 31 December 2016. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses and the CHARMS Statement guided protocol development. Inclusion criteria: age >60 years, inpatient, developed/validated a prognostic delirium prediction model. Exclusion criteria: alcohol-related delirium, sample size ≤50. The primary performance measures were calibration and discrimination statistics. Two authors independently conducted the search and extracted data. The synthesis of data was done by the first author. Disagreement was resolved by the mentoring author. The initial search resulted in 7,502 studies. Following full-text review of 192 studies, 33 were excluded based on the age criterion (<60 years) and 27 met the defined criteria. Twenty-three delirium prediction models were identified, 14 were externally validated and 3 were internally validated. The following populations were represented: 11 medical, 3 medical/surgical and 13 surgical. The assessment of delirium was often non-systematic, resulting in varied incidence. Fourteen models were externally validated, with areas under the receiver operating curve ranging from 0.52 to 0.94. Limitations in design, data collection methods and model metric reporting statistics were identified. Delirium prediction models for older adults show variable and typically inadequate predictive capabilities. Our review highlights the need for development of robust models to predict delirium in older inpatients. We provide recommendations for the development of such models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
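The discrimination statistic cited (area under the receiver operating curve) can be computed from a model's risk scores as the rank-based probability that a randomly chosen delirium case outscores a non-case. A minimal sketch with hypothetical predicted risks (the data are made up for illustration):

```python
def auc(case_scores, control_scores):
    """AUC as the probability a case outscores a control (ties count half)."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical predicted delirium risks from a prediction model
cases = [0.8, 0.6, 0.7, 0.9]
controls = [0.2, 0.5, 0.4, 0.7, 0.3]
print(auc(cases, controls))  # → 0.925
```

On this scale, 0.5 is chance-level discrimination, so the 0.52 to 0.94 range reported across models spans essentially uninformative to strongly discriminating.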
Tsai, Alexander C.
2014-01-01
OBJECTIVES To systematically review the reliability and validity of instruments used to screen for major depressive disorder or assess depression symptom severity among persons with HIV in sub-Saharan Africa. DESIGN Systematic review and meta-analysis. METHODS A systematic evidence search protocol was applied to seven bibliographic databases. Studies examining the reliability and/or validity of depression assessment tools were selected for inclusion if they were based on data collected from HIV-positive adults in any African member state of the United Nations. Random-effects meta-analysis was employed to calculate pooled estimates of depression prevalence. In a subgroup of studies of criterion-related validity, the bivariate random-effects model was used to calculate pooled estimates of sensitivity and specificity. RESULTS Of 1,117 records initially identified, I included 13 studies of 5,373 persons with HIV in 7 sub-Saharan African countries. Reported estimates of Cronbach’s alpha ranged from 0.63–0.95, and analyses of internal structure generally confirmed the existence of a depression-like construct accounting for a substantial portion of variance. The pooled prevalence of probable depression was 29.5% (95% CI, 20.5–39.4), while the pooled prevalence of major depressive disorder was 13.9% (95% CI, 9.7–18.6). The Center for Epidemiologic Studies-Depression scale was the most frequently studied instrument, with a pooled sensitivity of 0.82 (95% CI, 0.73–0.87) for detecting major depressive disorder. CONCLUSIONS Depression screening instruments yielded relatively high false positive rates. Overall, few studies described the reliability and/or validity of depression instruments in sub-Saharan Africa. PMID:24853307
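The pooled prevalence figures above come from a random-effects meta-analysis. A minimal DerSimonian-Laird sketch on hypothetical study-level prevalences (untransformed proportions for simplicity; published analyses often pool logit-transformed values, and all numbers here are made up):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study by 1/(v + tau^2) and pool
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical probable-depression prevalences with binomial variances
studies = [(0.25, 120), (0.32, 200), (0.28, 150)]  # (prevalence, n)
effects = [p for p, n in studies]
variances = [p * (1 - p) / n for p, n in studies]
pooled, ci = dersimonian_laird(effects, variances)
```

The pooled estimate lands between the study-level values, weighted by precision plus the estimated between-study heterogeneity.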
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
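The translation step described above, from an idiosyncratic source model into a common data model with standardized terminologies, can be illustrated with a toy mapping. The field names and concept id below are simplified placeholders, not the actual OMOP CDM table definitions:

```python
# Toy ETL sketch: translate a source record into a simplified CDM-style
# layout. All field names and the concept id are illustrative only.
source_record = {
    "pat_id": "A-102",
    "sex": "F",
    "birth_yr": 1958,
    "dx_code": "410.9",       # code in the source's own coding system
    "dx_date": "2009-03-14",
}

# A standardized-terminology lookup would normally come from the CDM's
# dictionary; here it is a stub mapping one source code to a concept id.
concept_lookup = {"410.9": 4329847}  # hypothetical concept id

person = {
    "person_id": source_record["pat_id"],
    "gender": source_record["sex"],
    "year_of_birth": source_record["birth_yr"],
}
condition_occurrence = {
    "person_id": source_record["pat_id"],
    "condition_concept_id": concept_lookup[source_record["dx_code"]],
    "condition_start_date": source_record["dx_date"],
}
```

Once every source database is expressed in one such layout with shared concept ids, the same analytic method can be executed unchanged against all of them, which is the property the authors validate.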
Lagarde, Marloes L J; Kamalski, Digna M A; van den Engel-Hoek, Lenie
2016-02-01
To systematically review the available evidence for the reliability and validity of cervical auscultation in diagnosing the several aspects of dysphagia in adults and children. Medline (PubMed), Embase and the Cochrane Library databases were searched. The systematic review was carried out following the steps of the PRISMA statement. The methodological quality of the included studies was evaluated using the Dutch 'Cochrane checklist for diagnostic accuracy studies'. A total of 90 articles were identified through the search strategy, and after applying the inclusion and exclusion criteria, six articles were included in this review. In the six studies, 197 patients were assessed with cervical auscultation. Two of the six articles were considered to be of 'good' quality and three studies were of 'moderate' quality. One article was excluded because of 'poor' methodological quality. Sensitivity ranged from 23% to 94% and specificity from 50% to 74%. Inter-rater reliability was 'poor' or 'fair' in all studies. Intra-rater reliability varied widely among speech language therapists. In this systematic review, conflicting evidence was found for the validity of cervical auscultation. The reliability of cervical auscultation is insufficient when it is used as a stand-alone tool in the diagnosis of dysphagia in adults. There is no available evidence for the validity and reliability of cervical auscultation in children. Cervical auscultation should not be used as a stand-alone instrument to diagnose dysphagia. © The Author(s) 2015.
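The sensitivity and specificity ranges reported follow directly from 2x2 agreement between cervical auscultation and the reference test. A minimal sketch with a hypothetical confusion matrix (counts invented for illustration; the reference standard is assumed to be instrumental assessment such as videofluoroscopy):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)   # true positives among all with dysphagia
    specificity = tn / (tn + fp)   # true negatives among all without
    return sensitivity, specificity

# Hypothetical counts: auscultation judgment vs. instrumental reference
sens, spec = sens_spec(tp=28, fn=12, tn=37, fp=13)
print(sens, spec)  # → 0.7 0.74
```

These example values fall inside the 23%-94% sensitivity and 50%-74% specificity ranges observed across the six included studies.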
Carbine, Kaylie A; Rodeback, Rebekah; Modersitzki, Erin; Miner, Marshall; LeCheminant, James D; Larson, Michael J
2018-05-19
Daily dietary decisions have the potential to impact our physical, mental, and emotional health. Event-related potentials (ERPs) can provide insight into cognitive processes, such as attention, working memory, and inhibitory control, that may influence the food-related decisions we make on a daily basis. We conducted a systematic review of the food-related cognition and ERP research in order to summarize the extant literature, identify future research questions, synthesize how food-related ERP components relate to eating habits and appetite, and demonstrate the utility of ERPs in examining food-related cognition. Forty-three articles were systematically extracted. In general, results indicated that food cues, compared to less palatable foods or neutral cues, elicited greater ERP amplitudes reflecting early or late attention allocation (e.g., increased P2, P3, and late positive potential amplitudes). Food cues were also associated with increased frontocentral P3 and N2 ERP amplitudes compared to neutral or less palatable food cues, suggesting increased recruitment of inhibitory control and conflict monitoring resources. However, there was significant heterogeneity in the literature, as experimental tasks, stimuli, and examined ERP components varied widely across studies; replication studies are therefore needed. In-depth research is also needed to establish how food-related ERPs differ across BMI groups and relate to real-world eating habits and appetite, in order to establish their ecological validity. Copyright © 2018. Published by Elsevier Ltd.
Hepatic fibrosis: Concept to treatment.
Trautwein, Christian; Friedman, Scott L; Schuppan, Detlef; Pinzani, Massimo
2015-04-01
Understanding the molecular mechanisms underlying liver fibrogenesis is fundamentally relevant to developing new treatments that are independent of the underlying etiology. The increasing success of antiviral treatments in blocking or reversing the fibrogenic progression of chronic liver disease has unearthed vital information about the natural history of fibrosis regression, and has established important principles and targets for antifibrotic drugs. Although antifibrotic activity has been demonstrated for many compounds in vitro and in animal models, none has been thoroughly validated in the clinic or commercialized as a therapy for fibrosis. In addition, it is likely that combination therapies that affect two or more key pathogenic targets and/or pathways will be needed. To accelerate the preclinical development of these combination therapies, reliable single target validation is necessary, followed by the rational selection and systematic testing of combination approaches. Improved noninvasive tools for the assessment of fibrosis content, fibrogenesis and fibrolysis must accompany in vivo validation in experimental fibrosis models, and especially in clinical trials. The rapidly changing landscape of clinical trial design for liver disease is recognized by regulatory agencies in the United States (FDA) and Western Europe (EMA), who are working together with the broad range of stakeholders to standardize approaches to testing antifibrotic drugs in cohorts of patients with chronic liver diseases. Copyright © 2015. Published by Elsevier B.V.
VALUE - Validating and Integrating Downscaling Methods for Climate Change Research
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Benestad, Rasmus; Kotlarski, Sven; Huth, Radan; Hertig, Elke; Wibig, Joanna; Gutierrez, Jose
2013-04-01
Our understanding of global climate change is mainly based on General Circulation Models (GCMs) with a relatively coarse resolution. Since climate change impacts are mainly experienced on regional scales, high-resolution climate change scenarios need to be derived from GCM simulations by downscaling. Several projects have been carried out over the last years to validate the performance of statistical and dynamical downscaling, yet several aspects have not been systematically addressed: variability on sub-daily, decadal and longer time-scales, extreme events, spatial variability and inter-variable relationships. Different downscaling approaches such as dynamical downscaling, statistical downscaling and bias correction approaches have not been systematically compared. Furthermore, collaboration between different communities, in particular regional climate modellers, statistical downscalers and statisticians, has been limited. To address these gaps, the EU Cooperation in Science and Technology (COST) action VALUE (www.value-cost.eu) was established. VALUE is a research network currently comprising participants from 23 European countries, running from 2012 to 2015. Its main aim is to systematically validate and develop downscaling methods for climate change research in order to improve regional climate change scenarios for use in climate impact studies. Inspired by the co-design idea of the international research initiative "Future Earth", stakeholders of climate change information have been involved in the definition of research questions to be addressed and are actively participating in the network. The key idea of VALUE is to identify the relevant weather and climate characteristics required as input for a wide range of impact models and to define an open framework to systematically validate these characteristics. Based on a range of benchmark data sets, in principle every downscaling method can be validated and compared with competing methods.
The results of this exercise will directly provide end users with important information about the uncertainty of regional climate scenarios, and will furthermore provide the basis for further developing downscaling methods. This presentation will provide background information on VALUE and discuss the identified characteristics and the validation framework.
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies have reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ on how closely the two constructs are related. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) models and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow these two kinds of processes to be represented by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise, purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data from 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable reflecting processes that varied as a function of the experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance in Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA overestimates the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
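The core idea of fixed-links modeling, separating a condition-invariant component from a demand-dependent one by fixing the latent variables' loadings, can be illustrated outside a full CFA with a toy least-squares decomposition. All numbers and variable names below are hypothetical, and the per-person regression is only a didactic stand-in for the latent-variable model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores: 200 people x 5 task conditions, built from a known
# condition-invariant component and a known demand-dependent component.
n_people, n_cond = 200, 5
constant = rng.normal(10.0, 2.0, size=n_people)   # constant process
slope = rng.normal(1.0, 0.3, size=n_people)       # WMC-related process
demands = np.arange(1, n_cond + 1)                # fixed loadings 1..5
noise = rng.normal(0.0, 0.5, size=(n_people, n_cond))
scores = constant[:, None] + slope[:, None] * demands + noise

# Fixed loadings: a column of ones (non-experimental latent variable) and
# the linearly increasing demand loadings (experimental latent variable).
loadings = np.column_stack([np.ones(n_cond), demands])

# Per-person least-squares scores on the two fixed-loading components.
coef, *_ = np.linalg.lstsq(loadings, scores.T, rcond=None)
est_constant, est_slope = coef

print(round(np.corrcoef(constant, est_constant)[0, 1], 2))
print(round(np.corrcoef(slope, est_slope)[0, 1], 2))
```

With the fixed loadings, the recovered components correlate strongly with the true generating processes, mirroring how the two latent variables in the abstract pick up constant versus demand-varying variance.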
Experimental validation of structural optimization methods
NASA Technical Reports Server (NTRS)
Adelman, Howard M.
1992-01-01
The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate the methods, as a way of effecting greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined; it includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and, finally, experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum-weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum-weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.
Inclusion of quasi-experimental studies in systematic reviews of health systems research.
Rockers, Peter C; Røttingen, John-Arne; Shemilt, Ian; Tugwell, Peter; Bärnighausen, Till
2015-04-01
Systematic reviews of health systems research commonly limit studies for evidence synthesis to randomized controlled trials. However, well-conducted quasi-experimental studies can provide strong evidence for causal inference. With this article, we aim to stimulate and inform discussions on including quasi-experiments in systematic reviews of health systems research. We define quasi-experimental studies as those that estimate causal effect sizes using exogenous variation in the exposure of interest that is not directly controlled by the researcher. We incorporate this definition into a non-hierarchical three-class taxonomy of study designs - experiments, quasi-experiments, and non-experiments. Based on a review of practice in three disciplines related to health systems research (epidemiology, economics, and political science), we discuss five commonly used study designs that fit our definition of quasi-experiments: natural experiments, instrumental variable analyses, regression discontinuity analyses, interrupted time series studies, and difference studies including controlled before-and-after designs, difference-in-difference designs and fixed effects analyses of panel data. We further review current practices regarding quasi-experimental studies in three non-health fields that utilize systematic reviews (education, development, and environment studies) to inform the design of approaches for synthesizing quasi-experimental evidence in health systems research. Ultimately, the aim of any review is practical: to provide useful information for policymakers, practitioners, and researchers. Future work should focus on building a consensus among users and producers of systematic reviews regarding the inclusion of quasi-experiments. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
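Of the difference-study designs listed, the difference-in-differences logic is the easiest to make concrete. A minimal numeric sketch with made-up group means (illustrative only, not data from the article):

```python
# Hypothetical pre/post outcome means for a treated and a comparison group
# (illustrative numbers only, not from the article).
treated_pre, treated_post = 4.0, 7.5
control_pre, control_post = 4.5, 5.5

# Difference-in-differences: the treated group's change minus the control
# group's change, which nets out the shared secular trend under the
# parallel-trends assumption.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # → 2.5
```

The estimate attributes 2.5 units of the treated group's 3.5-unit improvement to the intervention, with the remaining 1.0 unit explained by the trend common to both groups.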
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2⁵ two-level factorial design of experiments (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the mixer process input parameters with the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation gave a final R² of 99.60% with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio; B = 70°C mixing temperature; C = 70 rpm rotor speed; D = 5 minutes mixing period; and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
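The economy of a fractional two-level design comes from aliasing the added factor with a high-order interaction. A minimal sketch of one common half-fraction, assuming the generator E = ABCD (the article does not state which generator was used):

```python
from itertools import product

# Half-fraction of a five-factor two-level design (2^(5-1) = 16 runs).
# Coded levels: -1 (low) and +1 (high); the fifth factor E is set by the
# hypothetical generator E = A*B*C*D, i.e. defining relation I = ABCDE.
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d
    runs.append((a, b, c, d, e))

print(len(runs))  # → 16 runs instead of the full 32
```

Each run is one mixer setting; halving the run count is what makes a five-variable screening study like this one tractable in the laboratory.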
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application by using the conventional approaches. Here, we propose a novel calibration protocol, i.e., the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factors sensitivity analysis for factors fixing; ii) pseudo-global parameter correlation analysis for non-identifiable factors detection; and iii) formation of a parameter subset through estimation using a genetic algorithm. Its validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for the automatic calibration of ASMs and could potentially be applied to other ordinary-differential-equation models. PMID:25682959
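The final NOAP step, estimating the retained parameter subset with a genetic algorithm, can be sketched on a toy one-parameter problem. The decay model, rate constant and GA settings below are all illustrative assumptions, not details of the protocol:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "measurement": first-order substrate decay with true rate k = 0.3.
t = np.linspace(0.0, 10.0, 20)
observed = 5.0 * np.exp(-0.3 * t)

def sse(k):
    """Sum of squared errors between model prediction and observation."""
    return float(np.sum((5.0 * np.exp(-k * t) - observed) ** 2))

# Minimal genetic algorithm: truncation selection with elitism and
# Gaussian mutation over a fixed-size population of candidate rates.
pop = rng.uniform(0.01, 1.0, size=30)
for _ in range(40):
    fitness = np.array([sse(k) for k in pop])
    parents = pop[np.argsort(fitness)[:10]]               # keep the best third
    children = rng.choice(parents, size=20) + rng.normal(0, 0.02, 20)
    pop = np.concatenate([parents, np.clip(children, 1e-3, 2.0)])

best = pop[np.argmin([sse(k) for k in pop])]
print(round(float(best), 3))
```

A real ASM calibration replaces the scalar decay model with the full ODE system and the single rate with the sensitivity-screened parameter subset, but the selection-mutation loop is the same.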
Modified Welding Technique of a Hypo-Eutectic Al-Cu Alloy for Higher Mechanical Properties
NASA Astrophysics Data System (ADS)
Ghosh, B. R.; Gupta, R. K.; Biju, S.; Sinha, P. P.
The GTAW process is used for welding of pressure vessels made of the hypo-eutectic Al-Cu alloy AA2219 containing 6.3% Cu. The as-welded yield strength of the alloy was found to be in the range of 140-150 MPa using the conventional single-pass GTAW technique in both AC and DCSP modes. Interestingly, it was also found that weld strength decreased with increasing thickness of the weld coupons. The welding metallurgy of AA2219 Al alloy was critically reviewed and the factors responsible for the lower properties were identified. Multipass GTAW in DCSP mode was postulated to improve the weld strength of this alloy. Systematic experimentation using 12 mm thick plates was carried out, and a YS of 200 MPa was achieved in the as-welded condition. Thorough characterization, including optical and electron microscopy, was conducted to validate the metallurgical phenomena attributable to the improvement in weld strength. This paper presents a conceptual understanding of the welding metallurgy of AA2219 alloy and its validation by experiments, which could lead to better weld properties using multipass GTAW in DCSP mode.
Optimal Modality Selection for Cooperative Human-Robot Task Completion.
Jacob, Mithun George; Wachs, Juan P
2016-12-01
Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels, as robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses this gap by proposing a novel methodology to generate multimodal lexicons that maximize multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (p < 0.05) in all metrics, validating the predictability of the methodology. The methodology is validated in two scenarios (with and without modeling the risk of a human-robot collision) and the differences in the lexicons are analyzed.
NASA Technical Reports Server (NTRS)
Ahuja, K. K.; Mendoza, J.
1995-01-01
This report documents the results of an experimental investigation on the response of a cavity to external flowfields. The primary objective of this research was to acquire benchmark data on the effects of cavity length, width, depth, upstream boundary layer, and flow temperature on cavity noise. These data were to be used for validation of computational aeroacoustic (CAA) codes on cavity noise. To achieve this objective, a systematic set of acoustic and flow measurements was made for subsonic turbulent flows approaching a cavity. These measurements were conducted in the research facilities of the Georgia Tech Research Institute. Two cavity models were designed, one for heated flow and another for unheated flow studies. Both models were designed such that the cavity length (L) could easily be varied while holding fixed the depth (D) and width (W) dimensions of the cavity. Depth and width blocks were manufactured so that these dimensions could be varied as well. A wall jet issuing from a rectangular nozzle was used to simulate flows over the cavity.
Cooke, Steven J; Birnie-Gauvin, Kim; Lennox, Robert J; Taylor, Jessica J; Rytwinski, Trina; Rummer, Jodie L; Franklin, Craig E; Bennett, Joseph R; Haddaway, Neal R
2017-01-01
Policy development and management decisions should be based upon the best available evidence. In recent years, approaches to evidence synthesis, originating in the medical realm (such as systematic reviews), have been applied to conservation to promote evidence-based conservation and environmental management. Systematic reviews involve a critical appraisal of evidence, but studies that lack the necessary rigour (e.g. experimental, technical and analytical aspects) to justify their conclusions are typically excluded from systematic reviews or down-weighted in terms of their influence. One of the strengths of conservation physiology is the reliance on experimental approaches that help to more clearly establish cause-and-effect relationships. Indeed, experimental biology and ecology have much to offer in terms of building the evidence base that is needed to inform policy and management options related to pressing issues such as enacting endangered species recovery plans or evaluating the effectiveness of conservation interventions. Here, we identify a number of pitfalls that can prevent experimental findings from being relevant to conservation or would lead to their exclusion or down-weighting during critical appraisal in a systematic review. We conclude that conservation physiology is well positioned to support evidence-based conservation, provided that experimental designs are robust and that conservation physiologists understand the nuances associated with informing decision-making processes so that they can be more relevant.
Birnie-Gauvin, Kim; Lennox, Robert J.; Taylor, Jessica J.; Rytwinski, Trina; Rummer, Jodie L.; Franklin, Craig E.; Bennett, Joseph R.; Haddaway, Neal R.
2017-01-01
Policy development and management decisions should be based upon the best available evidence. In recent years, approaches to evidence synthesis, originating in the medical realm (such as systematic reviews), have been applied to conservation to promote evidence-based conservation and environmental management. Systematic reviews involve a critical appraisal of evidence, but studies that lack the necessary rigour (e.g. experimental, technical and analytical aspects) to justify their conclusions are typically excluded from systematic reviews or down-weighted in terms of their influence. One of the strengths of conservation physiology is the reliance on experimental approaches that help to more clearly establish cause-and-effect relationships. Indeed, experimental biology and ecology have much to offer in terms of building the evidence base that is needed to inform policy and management options related to pressing issues such as enacting endangered species recovery plans or evaluating the effectiveness of conservation interventions. Here, we identify a number of pitfalls that can prevent experimental findings from being relevant to conservation or would lead to their exclusion or down-weighting during critical appraisal in a systematic review. We conclude that conservation physiology is well positioned to support evidence-based conservation, provided that experimental designs are robust and that conservation physiologists understand the nuances associated with informing decision-making processes so that they can be more relevant. PMID:28835842
Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua
2018-01-04
Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing databases of experimentally validated lncRNAs. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu
2018-01-01
Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing databases of experimentally validated lncRNAs. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416
Abdelhafiz, Ali A; Ganzoury, Mohamed A; Amer, Ahmad W; Faiad, Azza A; Khalifa, Ahmed M; AlQaradawi, Siham Y; El-Sayed, Mostafa A; Alamgir, Faisal M; Allam, Nageh K
2018-04-18
Understanding the nature of interfacial defects of materials is a critical undertaking for the design of high-performance hybrid electrodes for photocatalysis applications. Theoretical and computational endeavors to achieve this have touched boundaries far ahead of their experimental counterparts. However, to achieve any industrial benefit out of such studies, experimental validation needs to be systematically undertaken. In this sense, we present herein experimental insights into the synergistic relationship between the lattice position and oxidation state of tungsten ions inside a TiO2 lattice, and the respective nature of the created defect states. Consequently, a roadmap to tune the defect states in anodically-fabricated, ultrathin-walled W-doped TiO2 nanotubes is proposed. Annealing the nanotubes in different gas streams enabled the engineering of defects in such structures, as confirmed by XRD and XPS measurements. While annealing under a hydrogen stream resulted in the formation of abundant Wn+ (n < 6) ions at the interstitial sites of the TiO2 lattice, oxygen- and air-annealing induced W6+ ions at substitutional sites. EIS and Mott-Schottky analyses indicated the formation of deep trap states in the hydrogen-annealed samples, and predominantly shallow donating defect states in the oxygen- and air-annealed samples. Consequently, the photocatalytic performance of the latter was significantly higher than that of the hydrogen-annealed counterparts. Upon increasing the W content, photoelectrochemical performance deteriorated due to the formation of WO3 crystallites that hindered charge transfer through the photoanode, as evident from the structural and chemical characterization. Taken together, this study validates previous theoretical predictions on the detrimental effect of interstitial W ions. In addition, it sheds light on the importance of defect states and their nature for tuning the photoelectrochemical performance of the investigated materials.
2013-01-01
Background Understanding the process of amino acid fermentation as a comprehensive system is a challenging task. Previously, we developed a literature-based dynamic simulation model, which included transcriptional regulation, transcription, translation, and enzymatic reactions related to glycolysis, the pentose phosphate pathway, the tricarboxylic acid (TCA) cycle, and the anaplerotic pathway of Escherichia coli. During simulation, cell growth was defined so as to reproduce the experimental cell growth profile of fed-batch cultivation in jar fermenters. However, to confirm the biological appropriateness of our model, sensitivity analysis and experimental validation were required. Results We constructed an L-glutamic acid fermentation simulation model by removing sucAB, a gene encoding α-ketoglutarate dehydrogenase. We then performed systematic sensitivity analysis for L-glutamic acid production; the results of this process corresponded with previous experimental data regarding L-glutamic acid fermentation. Furthermore, it allowed us to predict that accumulation of 3-phosphoglycerate in the cell would regulate the carbon flux into the TCA cycle and lead to an increase in the yield of L-glutamic acid via fermentation. We validated this hypothesis through a fermentation experiment involving a model L-glutamic acid-production strain, E. coli MG1655 ΔsucA, in which the phosphoglycerate kinase gene had been amplified to cause accumulation of 3-phosphoglycerate. The observed increase in L-glutamic acid production verified the biologically meaningful predictive power of our dynamic metabolic simulation model. Conclusions In this study, dynamic simulation using a literature-based model was shown to be useful for elucidating the precise mechanisms involved in fermentation processes inside the cell. Further exhaustive sensitivity analysis will facilitate identification of novel factors involved in the metabolic regulation of amino acid fermentation.
PMID:24053676
Nishio, Yousuke; Ogishima, Soichi; Ichikawa, Masao; Yamada, Yohei; Usuda, Yoshihiro; Masuda, Tadashi; Tanaka, Hiroshi
2013-09-22
Understanding the process of amino acid fermentation as a comprehensive system is a challenging task. Previously, we developed a literature-based dynamic simulation model, which included transcriptional regulation, transcription, translation, and enzymatic reactions related to glycolysis, the pentose phosphate pathway, the tricarboxylic acid (TCA) cycle, and the anaplerotic pathway of Escherichia coli. During simulation, cell growth was defined so as to reproduce the experimental cell growth profile of fed-batch cultivation in jar fermenters. However, to confirm the biological appropriateness of our model, sensitivity analysis and experimental validation were required. We constructed an L-glutamic acid fermentation simulation model by removing sucAB, a gene encoding α-ketoglutarate dehydrogenase. We then performed systematic sensitivity analysis for L-glutamic acid production; the results of this process corresponded with previous experimental data regarding L-glutamic acid fermentation. Furthermore, it allowed us to predict that accumulation of 3-phosphoglycerate in the cell would regulate the carbon flux into the TCA cycle and lead to an increase in the yield of L-glutamic acid via fermentation. We validated this hypothesis through a fermentation experiment involving a model L-glutamic acid-production strain, E. coli MG1655 ΔsucA, in which the phosphoglycerate kinase gene had been amplified to cause accumulation of 3-phosphoglycerate. The observed increase in L-glutamic acid production verified the biologically meaningful predictive power of our dynamic metabolic simulation model. In this study, dynamic simulation using a literature-based model was shown to be useful for elucidating the precise mechanisms involved in fermentation processes inside the cell. Further exhaustive sensitivity analysis will facilitate identification of novel factors involved in the metabolic regulation of amino acid fermentation.
Computational Biorheology of Human Blood Flow in Health and Disease
Fedosov, Dmitry A.; Dao, Ming; Karniadakis, George Em; Suresh, Subra
2014-01-01
Hematologic disorders arising from infectious diseases, hereditary factors and environmental influences can lead to, and can be influenced by, significant changes in the shape, mechanical and physical properties of red blood cells (RBCs), and the biorheology of blood flow. Hence, modeling of hematologic disorders should take into account the multiphase nature of blood flow, especially in arterioles and capillaries. We present here an overview of a general computational framework based on dissipative particle dynamics (DPD) which has broad applicability in cell biophysics with implications for diagnostics, therapeutics and drug efficacy assessments for a wide variety of human diseases. This computational approach, validated by independent experimental results, is capable of modeling the biorheology of whole blood and its individual components during blood flow so as to investigate cell mechanistic processes in health and disease. DPD is a Lagrangian method that can be derived from systematic coarse-graining of molecular dynamics but can scale efficiently up to arterioles and can also be used to model RBCs down to the spectrin level. We start from experimental measurements of a single RBC to extract the relevant biophysical parameters, using single-cell measurements involving such methods as optical tweezers, atomic force microscopy and micropipette aspiration, and cell-population experiments involving microfluidic devices. We then use these validated RBC models to predict the biorheological behavior of whole blood in healthy or pathological states, and compare the simulations with experimental results involving apparent viscosity and other relevant parameters. While the approach discussed here is sufficiently general to address a broad spectrum of hematologic disorders including certain types of cancer, this paper specifically deals with results obtained using this computational framework for blood flow in malaria and sickle cell anemia. PMID:24419829
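The pairwise interactions DPD builds on are soft forces with conservative, dissipative and random parts. A generic single-pair sketch in the standard Groot-Warren form follows; the parameter values are conventional defaults chosen for illustration, not the authors' calibrated RBC model:

```python
import numpy as np

rng = np.random.default_rng(2)

def dpd_pair_forces(ri, rj, vi, vj, a=25.0, gamma=4.5, kBT=1.0, rc=1.0, dt=0.01):
    """Force on particle i from particle j in standard DPD:
    conservative soft repulsion, pairwise friction, and a random force
    obeying the fluctuation-dissipation relation (Groot-Warren weights)."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc:                                   # beyond the cutoff: no force
        return np.zeros(3)
    e = rij / r                                   # unit vector from j to i
    w = 1.0 - r / rc                              # weight function w(r)
    f_cons = a * w * e                            # conservative (soft repulsion)
    f_diss = -gamma * w**2 * np.dot(e, vi - vj) * e   # dissipative (friction)
    sigma = np.sqrt(2.0 * gamma * kBT)            # fluctuation-dissipation link
    f_rand = sigma * w * rng.standard_normal() / np.sqrt(dt) * e
    return f_cons + f_diss + f_rand

f = dpd_pair_forces(np.zeros(3), np.array([0.5, 0.0, 0.0]),
                    np.zeros(3), np.zeros(3))
print(f.shape)
```

Because the repulsion is soft and bounded, much larger time steps are possible than in molecular dynamics, which is what lets DPD scale from the spectrin level of a single RBC up to arteriole-scale flows.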
McOmish, Caitlin E; Burrows, Emma L; Hannan, Anthony J
2014-10-01
Psychiatric disorders affect a substantial proportion of the population worldwide. This high prevalence, combined with the chronicity of the disorders and the major social and economic impacts, creates a significant burden. As a result, an important priority is the development of novel and effective interventional strategies for reducing incidence rates and improving outcomes. This review explores the progress that has been made to date in establishing valid animal models of psychiatric disorders, while beginning to unravel the complex factors that may be contributing to the limitations of current methodological approaches. We propose some approaches for optimizing the validity of animal models and developing effective interventions. We use schizophrenia and autism spectrum disorders as examples of disorders for which development of valid preclinical models, and fully effective therapeutics, have proven particularly challenging. However, the conclusions have relevance to various other psychiatric conditions, including depression, anxiety and bipolar disorders. We address the key aspects of construct, face and predictive validity in animal models, incorporating genetic and environmental factors. Our understanding of psychiatric disorders is accelerating exponentially, revealing extraordinary levels of genetic complexity, heterogeneity and pleiotropy. The environmental factors contributing to individual, and multiple, disorders also exhibit breathtaking complexity, requiring systematic analysis to experimentally explore the environmental mediators and modulators which constitute the 'envirome' of each psychiatric disorder. Ultimately, genetic and environmental factors need to be integrated via animal models incorporating the spatiotemporal complexity of gene-environment interactions and experience-dependent plasticity, thus better recapitulating the dynamic nature of brain development, function and dysfunction. © 2014 The British Pharmacological Society.
Lu, Liang-Xing; Wang, Ying-Min; Srinivasan, Bharathi Madurai; Asbahi, Mohamed; Yang, Joel K W; Zhang, Yong-Wei
2016-09-01
We perform systematic two-dimensional energetic analysis to study the stability of various nanostructures formed by dewetting solid films deposited on patterned substrates. Our analytical results show that by controlling system parameters such as the substrate surface pattern, film thickness and wetting angle, a variety of equilibrium nanostructures can be obtained. Phase diagrams are presented to show the complex relations between these system parameters and the various nanostructure morphologies. We further carry out both phase field simulations and dewetting experiments to validate the analytically derived phase diagrams. Good agreement between the results of our energetic analyses and those of our phase field simulations and experiments confirms the analysis. Hence, the phase diagrams presented here provide guidelines for using solid-state dewetting as a tool to achieve various nanostructures.
An Experimental Test of the Concentration Index
Bleichrodt, Han; Rohde, Kirsten I.M.; Van Ourti, Tom
2016-01-01
The concentration index is widely used to measure income-related inequality in health. No insight exists, however, into whether the concentration index connects with people's preferences about distributions of income and health, and whether a reduction in the concentration index reflects an increase in social welfare. We explored this question by testing the central assumption underlying the concentration index and found that it was systematically violated. We also tested the validity of alternative health inequality measures that have been proposed in the literature. Our data showed that decreases in the spread of income and health were considered socially desirable, but decreases in the correlation between income and health were not necessarily so. Support for a condition implying that the inequality in the distribution of income and in the distribution of health can be considered separately was mixed. PMID:22307035
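The index under test is conventionally computed as twice the covariance between health and the fractional income rank, divided by mean health. A small numeric sketch with hypothetical individuals (values invented for illustration):

```python
import numpy as np

# Hypothetical individuals: incomes and an attached pro-rich health gradient.
income = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
health = np.array([0.5, 0.6, 0.7, 0.8, 0.9])

# Fractional income rank: (i - 0.5) / n for the i-th poorest person.
n = len(income)
order = np.argsort(income)
rank = np.empty(n)
rank[order] = (np.arange(1, n + 1) - 0.5) / n

# Concentration index: C = 2 * cov(health, rank) / mean(health).
# bias=True gives the population covariance used in this convenient formula.
C = 2.0 * np.cov(health, rank, bias=True)[0, 1] / health.mean()
print(round(float(C), 3))  # → 0.114
```

A positive value indicates that better health is concentrated among the richer individuals; a perfectly equal distribution would give C = 0.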
Control of stacking loads in final waste disposal according to the borehole technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feuser, W.; Barnert, E.; Vijgen, H.
1996-12-01
The semihydrostatic model has been developed in order to assess the mechanical loads acting on heat-generating ILW(Q) and HTGR fuel element waste packages to be emplaced in vertical boreholes according to the borehole technique in underground rock salt formations. For the experimental validation of the theory, laboratory test stands reduced in scale are set up to simulate the bottom section of a repository borehole. A comparison of the measurement results with the data computed by the model, a correlation between the test stand results, and a systematic determination of material-typical crushed salt parameters in a separate research project will serve to derive a set of characteristic equations enabling a description of real conditions in a future repository.
Identification of terrain cover using the optimum polarimetric classifier
NASA Technical Reports Server (NTRS)
Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.
1988-01-01
A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.
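The quadratic distance measure used by the optimum polarimetric classifier can be sketched for zero-mean complex Gaussian returns: a sample is assigned to the class minimizing ln|Σ| + xᴴΣ⁻¹x. The covariance values below are invented for illustration, not taken from the terrain models in the article:

```python
import numpy as np

# Invented polarimetric covariance matrices for two terrain classes
# (feature vector of complex returns, e.g. [HH, HV, VV]).
sigma_trees = np.diag([2.0, 0.5, 1.5]).astype(complex)
sigma_grass = np.diag([1.0, 0.1, 0.8]).astype(complex)

def quadratic_distance(x, sigma):
    """d(x) = ln|Sigma| + x^H Sigma^{-1} x for zero-mean complex Gaussian clutter."""
    return (np.log(np.linalg.det(sigma).real)
            + (x.conj() @ np.linalg.solve(sigma, x)).real)

def classify(x):
    return ("trees" if quadratic_distance(x, sigma_trees)
            < quadratic_distance(x, sigma_grass) else "grass")

strong_return = np.array([1.4, 0.6, 1.1], dtype=complex)  # tree-like power
weak_return = np.array([0.3, 0.05, 0.2], dtype=complex)   # grass-like power
print(classify(strong_return), classify(weak_return))  # → trees grass
```

Using the full complex covariance is what distinguishes the fully polarimetric classifier from single-feature rules such as thresholding the VV-HH phase difference.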
Predicting the Coupling Properties of Axially-Textured Materials.
Fuentes-Cobas, Luis E; Muñoz-Romero, Alejandro; Montero-Cabrera, María E; Fuentes-Montero, Luis; Fuentes-Montero, María E
2013-10-30
A description of methods and computer programs for the prediction of "coupling properties" in axially-textured polycrystals is presented. Starting data are the single-crystal properties, texture and stereography. The validity of, and proper protocols for, applying the Voigt, Reuss and Hill approximations to estimate effective values of coupling properties are analyzed. Working algorithms for predicting the mentioned averages are given. Bunge's symmetrized spherical harmonics expansion of orientation distribution functions, inverse pole figures and (single- and polycrystal) physical properties is applied in all stages of the proposed methodology. The established mathematical route has been systematized in a working computer program. The discussion of piezoelectricity in a representative textured ferro-piezoelectric ceramic illustrates the application of the proposed methodology. Polycrystal coupling properties predicted by the suggested route are fairly close to experimentally measured ones.
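For a scalar effective property, the Voigt and Reuss estimates reduce to the arithmetic and harmonic means of the single-crystal values over the orientation distribution, and the Hill estimate is their average. A minimal sketch (the property values and orientation weights below are hypothetical, not from the paper):

```python
def voigt_reuss_hill(values, weights):
    """Voigt (arithmetic), Reuss (harmonic) and Hill averages of a scalar
    single-crystal property over orientation fractions `weights`."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    voigt = sum(w * v for w, v in zip(weights, values))
    reuss = 1.0 / sum(w / v for w, v in zip(weights, values))
    hill = 0.5 * (voigt + reuss)
    return voigt, reuss, hill

# hypothetical two-orientation polycrystal
v, r, h = voigt_reuss_hill([100.0, 50.0], [0.5, 0.5])
```

The Reuss value always lies at or below the Voigt value, with Hill between them, which is why Hill is often quoted as the practical estimate.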
Flexible Piezoelectric Energy Harvesting from Mouse Click Motions
Cha, Youngsu; Hong, Jin; Lee, Jaemin; Park, Jung-Min; Kim, Keehoon
2016-01-01
In this paper, we study energy harvesting from the mouse click motions of a robot finger and a human index finger using a piezoelectric material. The feasibility of energy harvesting from mouse click motions is experimentally and theoretically assessed. The fingers wear a glove with a pocket that holds the piezoelectric material. We model the energy harvesting system through the inverse kinematic framework of parallel joints in a finger and the electromechanical coupling equations of the piezoelectric material. The model is validated through energy harvesting experiments on the robot and human fingers with systematically varied load resistance. We find that energy harvesting is maximized when the load resistance matches the impedance of the piezoelectric material, and the harvested energy level is tens of nJ. PMID:27399705
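The matched-load result can be illustrated with a simple lumped model: near the excitation frequency a piezoelectric harvester behaves roughly like a current source shunted by its internal capacitance, so the average power into a resistive load peaks when R equals the capacitor impedance 1/(wC). All component values below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

f = 5.0                # hypothetical click repetition frequency (Hz)
omega = 2 * np.pi * f
c_p = 20e-9            # hypothetical piezo capacitance (F)
i0 = 1e-6              # hypothetical source current amplitude (A)

def avg_power(r):
    """Average power delivered to load R by a sinusoidal current source
    i0*sin(wt) shunted by the piezo capacitance c_p (current divider)."""
    z_c = 1.0 / (omega * c_p)                 # capacitor impedance magnitude
    i_r = i0 * z_c / np.sqrt(r**2 + z_c**2)   # load-current amplitude
    return 0.5 * i_r**2 * r

loads = np.logspace(4, 8, 400)    # sweep 10 kOhm .. 100 MOhm
powers = avg_power(loads)
r_opt = loads[np.argmax(powers)]  # numerically found optimum
r_match = 1.0 / (omega * c_p)     # theoretical matched load
```

The sweep's maximum lands at the matched impedance, mirroring the experimental observation in the abstract.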
Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.
Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P
2018-02-01
This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.
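The liquid-gas oxygen transfer at the heart of the aeration description is conventionally modelled with the two-film law dC/dt = kLa*(C* - C). A minimal sketch of that single transformation (the tank parameters are hypothetical, not taken from the paper):

```python
def simulate_do(kla, c_sat, c0, dt, t_end):
    """Forward-Euler integration of dC/dt = kLa * (C_sat - C),
    the two-film model of dissolved-oxygen transfer."""
    c, t, profile = c0, 0.0, [c0]
    while t < t_end - 1e-12:
        c += dt * kla * (c_sat - c)
        t += dt
        profile.append(c)
    return profile

# hypothetical aeration-tank values: kLa in 1/h, concentrations in mg/L
profile = simulate_do(kla=6.0, c_sat=9.1, c0=1.0, dt=0.001, t_end=2.0)
```

After two hours at kLa = 6 1/h the concentration has essentially reached saturation, as the exponential solution C(t) = C* - (C* - C0)e^(-kLa*t) predicts.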
Dall'Oglio, Immacolata; Mascolo, Rachele; Gawronski, Orsola; Tiozzo, Emanuela; Portanova, Anna; Ragni, Angela; Alvaro, Rosaria; Rocco, Gennaro; Latour, Jos M
2018-03-01
This systematic review synthesised and described instruments measuring parent satisfaction with the increasingly standard practice of family-centred care (FCC) in neonatal intensive care units. We evaluated 11 studies published from January 2006 to March 2016: two studies validated a parent satisfaction questionnaire, and nine developed or modified previous questionnaires to use as outcome measures in their local settings. Most instruments were not tested for reliability and validity. Only two validated instruments included all six of the FCC principles and could assess parent satisfaction with FCC in neonatal intensive care units and be considered as outcome indicators for further research. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Numerical Analysis of Turbulent Flows in Channels of Complex Geometry
NASA Astrophysics Data System (ADS)
Farbos De Luzan, Charles
The current study proposes to follow a systematic, validated approach to applied fluid mechanics problems in order to evaluate the ability of different computational fluid dynamics (CFD) codes to be a relevant design tool. This systematic approach involves different operations such as grid sensitivity analyses, turbulence model comparisons and appropriate wall treatments, in order to define case-specific optimal parameters for industrial applications. A validation effort is performed on each study, with particle image velocimetry (PIV) experimental results as the validating metric. The first part of the dissertation lays down the principles of validation, and presents the details of a grid sensitivity analysis, as well as a benchmark of turbulence models. The models are available in commercial solvers, and in most cases the default values of the equation constants are retained. The validation experimental data are taken with a hot wire, and have served as a reference to validate multiple turbulence models for turbulent flows in channels. In a second part, the study of a coaxial piping system will compare a set of different steady Reynolds-Averaged Navier-Stokes (RANS) turbulence models, namely the one-equation Spalart-Allmaras model, and the two-equation models standard k-epsilon, k-epsilon realizable, k-epsilon RNG, standard k-omega, k-omega SST, and transition SST. The geometry of interest involves a transition from an annulus into a larger one, where highly turbulent phenomena occur, such as recirculation and jet impingement. Based on a set of constraints that are defined in the analysis, a chosen model will be tested on new designs in order to evaluate their performance. The third part of this dissertation will address the steady-state flow patterns in a Viscosity-Sensitive Fluidic Diode (VSFD). This device is used in a fluidics application, and its originality lies in the fact that it does not require a control fluid in order to operate. 
This section will discuss the treatment of viscosity in a steady RANS model, and will provide observations that will support the design of an improved device. The fourth part of the document will address the unsteady-state flow patterns in a Bi-Stable Valve (BSV) activated by fluids of different viscosities. This device exhibits a bi-stable behavior, referred to as the switch, whose actuation depends on the viscosity of the fluid. This section will discuss the dependence of unsteady flow simulations on initial conditions, and will provide observations that will support the design of an improved device. In a fifth and final part, compressible large eddy simulation is employed to numerically investigate the laryngeal flow. Symmetric static models of the human larynx with a divergent glottis are considered, with the presence of False Vocal Folds (FVFs). The FVFs are a main factor affecting the closure of the True Vocal Folds (TVFs). The direct link between the FVF geometry and the motion of the TVFs, and by extension voice production, is of interest for medical applications as well as future research. The presence of the FVFs also changes the dominant frequencies in the velocity and pressure spectra.
Kwak, Namhee; Swan, Joshua T; Thompson-Moore, Nathaniel; Liebl, Michael G
2016-08-01
This study aims to develop a systematic search strategy and test its validity and reliability in terms of identifying projects published in peer-reviewed journals as reported by residency graduates through an online survey. This study was a prospective blind comparison to a reference standard. Pharmacy residency projects conducted at the study institution between 2001 and 2012 were included. A step-wise, systematic procedure containing up to 8 search strategies in PubMed and EMBASE for each project was created using the names of authors and abstract keywords. In order to further maximize sensitivity, complex phrases with multiple variations were truncated to the root word. Validity was assessed by obtaining information on publications from an online survey deployed to residency graduates. The search strategy identified 13 publications (93% sensitivity, 100% specificity, and 99% accuracy). Both methods identified a similar proportion achieving publication (19.7% search strategy vs 21.2% survey, P = 1.00). Reliability of the search strategy was affirmed by the perfect agreement between 2 investigators (k = 1.00). This systematic search strategy demonstrated a high sensitivity, specificity, and accuracy for identifying publications resulting from pharmacy residency projects using information available in residency conference abstracts. © The Author(s) 2015.
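The reported figures follow from a standard 2x2 confusion table against the reference standard (the survey). The sketch below recomputes sensitivity, specificity and accuracy from counts chosen to be consistent with the reported 93% sensitivity and 100% specificity; the exact counts are an assumption for illustration:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# hypothetical counts: 13 publications found by the search strategy,
# one publication missed, no false positives among the remaining projects
sens, spec, acc = diagnostic_metrics(tp=13, fp=0, tn=52, fn=1)
```

With these counts, sensitivity is 13/14 (about 93%) and specificity is exactly 100%, matching the pattern reported in the abstract.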
McLean, Rachael M; Farmer, Victoria L; Nettleton, Alice; Cameron, Claire M; Cook, Nancy R; Campbell, Norman R C
2017-12-01
Food frequency questionnaires (FFQs) are often used to assess dietary sodium intake, although 24-hour urinary excretion is the most accurate measure of intake. The authors conducted a systematic review to investigate whether FFQs are a reliable and valid way of measuring usual dietary sodium intake. Results from 18 studies are described in this review, including 16 validation studies. The methods of study design and analysis varied widely with respect to FFQ instrument, number of 24-hour urine collections collected per participant, methods used to assess completeness of urine collections, and statistical analysis. Overall, there was poor agreement between estimates from FFQ and 24-hour urine. The authors suggest a framework for validation and reporting based on a consensus statement (2004), and recommend that all FFQs used to estimate dietary sodium intake undergo validation against multiple 24-hour urine collections. ©2017 Wiley Periodicals, Inc.
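Agreement between FFQ estimates and 24-hour urinary excretion, the comparison at issue in this review, is commonly examined with Bland-Altman bias and 95% limits of agreement. A minimal sketch with invented paired sodium estimates (mg/day), not data from any of the reviewed studies:

```python
import statistics

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired sodium intakes: FFQ vs 24-hour urine (mg/day)
ffq   = [2300, 2800, 3100, 2600, 3500, 2900]
urine = [2900, 3300, 3200, 3100, 3900, 3500]
bias, loa = bland_altman(ffq, urine)
```

A consistently negative bias like the one in this toy data would indicate the FFQ underestimating intake relative to urine, a pattern a validation study would need to report alongside the limits of agreement.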
Langhaug, Lisa F.; Sherr, Lorraine; Cowan, Frances M
2012-01-01
Summary Objectives To systematically review comparative research from developing countries on the effects of questionnaire delivery mode. Methods We searched Medline, EMbase and PsychINFO and ISSTDR conference proceedings. Randomized-controlled trials and quasi-experimental studies were included if they compared two or more questionnaire delivery modes, were conducted in a developing country, reported on sexual behaviours, and occurred after 1980. Results 28 articles reporting on 26 studies met the inclusion criteria. Heterogeneity of reported trial outcomes between studies made it inappropriate to combine trial outcomes. 18 studies compared audio computer-assisted survey instruments (ACASI) or its derivatives (PDA or CAPI) against other self-administered questionnaires, face-to-face interviews, or the random response technique. Despite wide variation in geography and populations sampled, there was strong evidence that computer-assisted interviews lowered item non-response rates and raised rates of reporting sensitive behaviours. ACASI also improved data entry quality. A wide range of sexual behaviours were reported, including vaginal, oral, anal and/or forced sex, age of sexual debut, and condom use at first and/or last sex. Validation of self-reports using biomarkers was rare. Conclusions These data reaffirm that questionnaire delivery modes do affect self-reported sexual behaviours and that use of ACASI can significantly reduce reporting bias. Its acceptability and feasibility in developing country settings should encourage researchers to consider its use when conducting sexual health research. Triangulation of self-reported data using biomarkers is recommended. Standardising sexual behaviour measures would allow for meta-analysis. PMID:20409291
Young Children and Tablets: A Systematic Review of Effects on Learning and Development
ERIC Educational Resources Information Center
Herodotou, C.
2018-01-01
Mobile applications are popular among young children, yet there is a dearth of studies examining their impact on learning and development. A systematic review identified 19 studies reporting learning effects on children 2 to 5 years old. The number of children participating in experimental, quasi-experimental, or mixed-method studies was 862 and…
Animal-Assisted Therapies for Youth with or at Risk for Mental Health Problems: A Systematic Review
ERIC Educational Resources Information Center
Hoagwood, Kimberly Eaton; Acri, Mary; Morrissey, Meghan; Peth-Pierce, Robin
2017-01-01
To systematically review experimental evidence regarding animal-assisted therapies (AAT) for children or adolescents with or at risk for mental health conditions, we reviewed all experimental AAT studies published between 2000-2015, and compared studies by animal type, intervention, and outcomes. Studies were included if used therapeutically for…
Martin, RobRoy L.
2012-01-01
Purpose/Background: The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. Methods: A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. Results: The single-leg stance, deep squat, single-leg squat, and star excursion balance tests (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement. Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Conclusions: Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. Level of Evidence: 2b (Systematic Review of Literature) PMID:22893860
Gagné, Myriam; Boulet, Louis-Philippe; Pérez, Norma; Moisan, Jocelyne
2018-04-30
To systematically identify the measurement properties of patient-reported outcome instruments (PROs) that evaluate adherence to inhaled maintenance medication in adults with asthma. We conducted a systematic review of six databases. Two reviewers independently included studies on the measurement properties of PROs that evaluated adherence in asthmatic participants aged ≥18 years. Based on the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN), the reviewers (1) extracted data on internal consistency, reliability, measurement error, content validity, structural validity, hypotheses testing, cross-cultural validity, criterion validity, and responsiveness; (2) assessed the methodological quality of the included studies; (3) assessed the quality of the measurement properties (positive or negative); and (4) summarised the level of evidence (limited, moderate, or strong). We screened 6,068 records and included 15 studies (14 PROs). No studies evaluated measurement error or responsiveness. Based on methodological and measurement property quality assessments, we found limited positive evidence of: (a) internal consistency of the Adherence Questionnaire, Refined Medication Adherence Reason Scale (MAR-Scale), Medication Adherence Report Scale for Asthma (MARS-A), and Test of the Adherence to Inhalers (TAI); (b) reliability of the TAI; and (c) structural validity of the Adherence Questionnaire, MAR-Scale, MARS-A, and TAI. We also found limited negative evidence of: (d) hypotheses testing of Adherence Questionnaire; (e) reliability of the MARS-A; and (f) criterion validity of the MARS-A and TAI. Our results highlighted the need to conduct further high-quality studies that will positively evaluate the reliability, validity, and responsiveness of the available PROs. This article is protected by copyright. All rights reserved.
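Of the COSMIN measurement properties listed, internal consistency is the most mechanical to compute, usually as Cronbach's alpha over the instrument's items. A minimal sketch with hypothetical item scores for a short adherence questionnaire (the scores and item count are invented):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of k item-score lists, one per
    item, each holding the scores of the same n respondents."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# hypothetical 4-item adherence questionnaire, five respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
    [4, 3, 5, 2, 3],
]
alpha = cronbach_alpha(items)
```

Values around 0.7-0.95 are the range typically judged acceptable in studies like those reviewed here; the toy data above is deliberately highly consistent.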
Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth
2014-05-01
In studies involving theoretical constructs, it is important that measures have good content validity and that there is not contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? 
The paper proposes discriminant content validity (DCV), a systematic and transparent method of assessing and reporting whether items assess the intended theoretical construct and only that construct. In three studies, DCV was applied to measures of illness perceptions, control cognitions, and theory of planned behaviour response formats. Appendix S1 gives content validity indices for each item of each questionnaire investigated. Discriminant content validity is ideally applied while the measure is being developed, before it is used to measure the construct(s), but can also be applied after a measure has been used. © 2014 The British Psychological Society.
Johansen, Ilona; Andreassen, Rune
2014-12-23
MicroRNAs (miRNAs) are an abundant class of endogenous small RNA molecules that downregulate gene expression at the post-transcriptional level. They play important roles by regulating genes that control multiple biological processes, and in recent years there has been increased interest in studying miRNA genes and miRNA gene expression. The most common method applied to study expression of single genes is quantitative PCR (qPCR). However, before expression of mature miRNAs can be studied, robust qPCR methods (miRNA-qPCR) must be developed. This includes identification and validation of suitable reference genes. We are particularly interested in Atlantic salmon (Salmo salar). This is an economically important aquaculture species, but no reference genes dedicated for use in miRNA-qPCR methods have been validated for this species. Our aim was, therefore, to identify suitable reference genes for miRNA-qPCR methods in Salmo salar. We used a systematic approach in which we utilized similar studies in other species, some biological criteria, results from deep sequencing of small RNAs and, finally, experimental validation of candidate reference genes by qPCR to identify the most suitable reference genes. Ssa-miR-25-3p was identified as the most suitable single reference gene. The best combination of two reference genes was ssa-miR-25-3p and ssa-miR-455-5p. These two genes were constitutively and stably expressed across many different tissues. Furthermore, infectious salmon anaemia did not seem to affect their expression levels. These genes were amplified with high specificity and good efficiency, and the qPCR assays showed good linearity when applying a simple SYBR Green miRNA-qPCR method using miRNA gene-specific forward primers. We have identified suitable reference genes for miRNA-qPCR in Atlantic salmon. These results will greatly facilitate further studies on miRNA genes in this species. 
The reference genes identified are conserved genes that are identical in their mature sequence in many aquaculture species. Therefore, they may also be suitable as reference genes in other teleosts. Finally, the systematic approach used in our study successfully identified suitable reference genes, suggesting that this may be a useful strategy to apply in similar validation studies in other aquaculture species.
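A simple first screen in a reference-gene validation like this is to rank candidates by the variability of their expression across tissues and conditions, for example the coefficient of variation of qPCR Cq values. The Cq values below are invented for illustration; only the gene names come from the abstract:

```python
import statistics

def cv(values):
    """Coefficient of variation (%) of a set of qPCR Cq values."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical Cq values across five tissues for three candidate genes
candidates = {
    "ssa-miR-25-3p":  [22.1, 22.3, 22.0, 22.2, 22.4],
    "ssa-miR-455-5p": [24.0, 24.5, 23.8, 24.2, 24.1],
    "unstable-miR":   [20.0, 25.0, 22.5, 27.0, 19.5],
}
ranking = sorted(candidates, key=lambda g: cv(candidates[g]))
```

Genes with the lowest CV across conditions are the better normalizer candidates; dedicated algorithms such as geNorm or NormFinder refine this basic idea.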
Current Status of Simulation-based Training Tools in Orthopedic Surgery: A Systematic Review.
Morgan, Michael; Aydin, Abdullatif; Salih, Alan; Robati, Shibby; Ahmed, Kamran
To conduct a systematic review of orthopedic training and assessment simulators with reference to their level of evidence (LoE) and level of recommendation. Medline and EMBASE library databases were searched for English language articles published between 1980 and 2016, describing orthopedic simulators or validation studies of these models. All studies were assessed for LoE, and each model was subsequently awarded a level of recommendation using a modified Oxford Centre for Evidence-Based Medicine classification, adapted for education. A total of 76 articles describing orthopedic simulators met the inclusion criteria, 47 of which described at least 1 validation study. The most commonly identified models (n = 34) and validation studies (n = 26) were for knee arthroscopy. Construct validation was the most frequent validation study attempted by authors. In all, 62% (47 of 76) of the simulator studies described arthroscopy simulators, which also contained validation studies with the highest LoE. Orthopedic simulators are increasingly being subjected to validation studies, although the LoE of such studies generally remain low. There remains a lack of focus on nontechnical skills and on cost analyses of orthopedic simulators. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A signal detection-item response theory model for evaluating neuropsychological measures.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Risbrough, Victoria B; Baker, Dewleen G
2018-02-05
Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias. Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness. SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education. SD-IRT models benefit from the measurement rigor of item response theory, which permits the modeling of item difficulty and examinee ability, and from signal detection theory, which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. 
Future work might include the development of computerized adaptive tests and integration with mixture and random-effects models.
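The two constructs named above, memory discrimination and response bias, are computed in classical signal detection theory from hit and false-alarm rates as d' = z(H) - z(F) and c = -(z(H) + z(F))/2. A minimal sketch (the trial counts are hypothetical, not from the study):

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Memory discrimination (d') and response bias (c) from trial counts
    in an old/new recognition task."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# hypothetical recognition-memory counts (old/new word judgements)
d_prime, criterion = sdt_indices(hits=40, misses=10,
                                 false_alarms=10, correct_rejections=40)
```

In practice, extreme rates (0 or 1) are corrected before taking z, e.g. with a log-linear adjustment; the SD-IRT models in the abstract avoid this by estimating the quantities at the item level.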
Mantzoukas, Stefanos
2009-04-01
Evidence-based practice has become an imperative for efficient, effective and safe practice. Furthermore, evidence emerging from published research is considered a valid knowledge source for guiding practice. The aim of this paper is to review all research articles published in the top 10 general nursing journals for the years 2000-2006 to identify the methodologies used, the types of evidence these studies produced and the topics they addressed. Quantitative content analysis was implemented to study all published research papers of the top 10 general nursing journals for the years 2000-2006. The abstracts of all research articles were analysed with regard to the methodologies of enquiry, the types of evidence produced and the topics studied. Percentages were calculated to enable conclusions to be drawn. The results for the category of methodologies used were 7% experimental, 6% quasi-experimental, 39% non-experimental, 2% ethnographical studies, 7% phenomenological, 4% grounded theory, 1% action research, 1% case study, 15% unspecified, 5.5% other, 0.5% meta-synthesis, 2% meta-analysis, 5% literature reviews and 3% secondary analysis. For the category of types of evidence, the results were 4% hypothesis/theory testing, 11% evaluative, 5% comparative, 2% correlational, 46% descriptive, 5% interpretative and 27% exploratory. For the category of topics studied, the results were 45% practice/clinical, 8% educational, 11% professional, 3% spiritual/ethical/metaphysical, 26% health promotion and 7% managerial/policy. Published studies can provide adequate evidence for practice if nursing journals conceptualise evidence emerging from non-experimental and qualitative studies as relevant types of evidence for practice and develop appropriate mechanisms for assessing its validity. Also, nursing journals need to increase and encourage the publication of studies that implement RCT methodology, systematic reviews, meta-synthesis and meta-analysis methodologies. Finally, nursing journals need to encourage more high-quality research evidence deriving from interpretative, theory-testing and evaluative types of studies that are practice relevant.
Test system stability and natural variability of a Lemna gibba L. bioassay.
Scherr, Claudia; Simon, Meinhard; Spranger, Jörg; Baumgartner, Stephan
2008-09-04
In ecotoxicological and environmental studies Lemna spp. are used as test organisms due to their small size, rapid predominantly vegetative reproduction, easy handling and high sensitivity to various chemicals. However, there is not much information available concerning spatial and temporal stability of experimental set-ups used for Lemna bioassays, though this is essential for interpretation and reliability of results. We therefore investigated stability and natural variability of a Lemna gibba bioassay assessing area-related and frond number-related growth rates under controlled laboratory conditions over about one year. Lemna gibba L. was grown in beakers with Steinberg medium for one week. Area-related and frond number-related growth rates (r(area) and r(num)) were determined with a non-destructive image processing system. To assess inter-experimental stability, 35 independent experiments were performed with 10 beakers each in the course of one year. We observed changes in growth rates by a factor of two over time. These did not correlate well with temperature or relative humidity in the growth chamber. In order to assess intra-experimental stability, we analysed six systematic negative control experiments (nontoxicant tests) with 96 replicate beakers each. Evaluation showed that the chosen experimental set-up was stable and did not produce false positive results. The coefficient of variation was lower for r(area) (2.99%) than for r(num) (4.27%). It is hypothesised that the variations in growth rates over time under controlled conditions are partly due to endogenic periodicities in Lemna gibba. The relevance of these variations for toxicity investigations should be investigated more closely. The area-related growth rate seems to be a more precise non-destructive calculation parameter than the number-related growth rate. 
Furthermore, we propose two new validity criteria for Lemna gibba bioassays: variability of average specific and section-by-section segmented growth rate, complementary to average specific growth rate as the only validity criterion existing in guidelines for duckweed bioassays.
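The growth rates discussed above are average specific growth rates, r = (ln x_t - ln x_0)/t, and the proposed validity criterion concerns their variability across replicate beakers. A minimal sketch with invented frond counts (not data from the study):

```python
import math
import statistics

def specific_growth_rate(x0, xt, days):
    """Average specific growth rate r = (ln xt - ln x0) / t."""
    return (math.log(xt) - math.log(x0)) / days

# hypothetical replicate beakers: (initial, final) frond counts over 7 days
replicates = [(12, 60), (12, 58), (12, 63), (12, 59)]
rates = [specific_growth_rate(x0, xt, 7.0) for x0, xt in replicates]
cv_percent = 100.0 * statistics.stdev(rates) / statistics.mean(rates)
```

A coefficient of variation in the low single-digit percent range, as in this toy data, is in line with the intra-experimental stability the authors report (2.99% for area, 4.27% for frond number).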
Test System Stability and Natural Variability of a Lemna Gibba L. Bioassay
Scherr, Claudia; Simon, Meinhard; Spranger, Jörg; Baumgartner, Stephan
2008-01-01
Background In ecotoxicological and environmental studies Lemna spp. are used as test organisms due to their small size, rapid predominantly vegetative reproduction, easy handling and high sensitivity to various chemicals. However, there is not much information available concerning spatial and temporal stability of experimental set-ups used for Lemna bioassays, though this is essential for interpretation and reliability of results. We therefore investigated stability and natural variability of a Lemna gibba bioassay assessing area-related and frond number-related growth rates under controlled laboratory conditions over about one year. Methology/Principal Findings Lemna gibba L. was grown in beakers with Steinberg medium for one week. Area-related and frond number-related growth rates (r(area) and r(num)) were determined with a non-destructive image processing system. To assess inter-experimental stability, 35 independent experiments were performed with 10 beakers each in the course of one year. We observed changes in growth rates by a factor of two over time. These did not correlate well with temperature or relative humidity in the growth chamber. In order to assess intra-experimental stability, we analysed six systematic negative control experiments (nontoxicant tests) with 96 replicate beakers each. Evaluation showed that the chosen experimental set-up was stable and did not produce false positive results. The coefficient of variation was lower for r(area) (2.99%) than for r(num) (4.27%). Conclusions/Significance It is hypothesised that the variations in growth rates over time under controlled conditions are partly due to endogenic periodicities in Lemna gibba. The relevance of these variations for toxicity investigations should be investigated more closely. Area-related growth rate seems to be more precise as non-destructive calculation parameter than number-related growth rate. 
Furthermore, we propose two new validity criteria for Lemna gibba bioassays: variability of average specific and section-by-section segmented growth rate, complementary to average specific growth rate as the only validity criterion existing in guidelines for duckweed bioassays. PMID:18769541
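The quantities reported above — a specific growth rate per beaker and its coefficient of variation across replicates — follow standard formulas that can be sketched briefly. This is a minimal illustration; the area readings and variable names are hypothetical, not data from the study:

```python
import math
import statistics

def growth_rate(x0, xt, days):
    """Specific growth rate r = ln(x_t / x_0) / t, the standard
    relative growth rate used in duckweed bioassays."""
    return math.log(xt / x0) / days

# hypothetical replicate beakers: (initial, final) frond area in mm^2 after 7 days
replicates = [(120.0, 480.0), (118.0, 465.0), (122.0, 490.0)]
rates = [growth_rate(a0, a7, 7) for a0, a7 in replicates]

# coefficient of variation (%) across replicates, the stability measure
# reported for r(area) and r(num)
cv = 100 * statistics.stdev(rates) / statistics.mean(rates)
```

The same computation applies to frond counts for r(num); only the measured quantity changes.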
Verification of Internal Dose Calculations.
NASA Astrophysics Data System (ADS)
Aissi, Abdelmadjid
The MIRD internal dose calculations have been in use for more than 15 years, but their accuracy has always been questionable. There have been attempts to verify these calculations; however, these attempts had various shortcomings that left the question of verifying the MIRD data unanswered. The purpose of this research was to develop techniques and methods to verify the MIRD calculations in a more systematic and scientific manner. The research consisted of improving a volumetric dosimeter, developing molding techniques, and adapting the Monte Carlo computer code ALGAM to the experimental conditions and vice versa. The organic dosimetric system contained TLD-100 powder and could be shaped to represent human organs. The dosimeter possessed excellent characteristics for the measurement of internal absorbed doses, even in the case of the lungs. The molding techniques are inexpensive and were used in the fabrication of dosimetric and radioactive source organs. The adaptation of the computer program provided useful theoretical data with which the experimental measurements were compared. The experimental data and the theoretical calculations were compared for 6 source organ-7 target organ configurations. The comparison indicated agreement between measured and calculated absorbed doses, when taking into consideration the average uncertainty (16%) of the measurements and the average coefficient of variation (10%) of the Monte Carlo calculations. However, analysis of the data also indicated that the Monte Carlo method might overestimate the internal absorbed doses. Even if such an overestimate exists, the MIRD method in internal dosimetry at least does not lead to unnecessary radiation exposure caused by underestimating the absorbed dose.
The experimental and the theoretical data were also used to test the validity of the Reciprocity Theorem for heterogeneous phantoms, such as the MIRD phantom and its physical representation, Mr. ADAM. The results indicated that the Reciprocity Theorem is valid within an average range of uncertainty of 8%.
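The comparison logic described above — measured and calculated doses judged to agree when their difference lies within the combined uncertainties — can be sketched as follows. The dose values are hypothetical; only the 16% and 10% relative uncertainties come from the abstract, and the coverage factor k is an assumption:

```python
import math

def agree(measured, calculated, rel_u_meas=0.16, rel_u_calc=0.10, k=2):
    """True when two dose estimates differ by less than k times their
    combined (quadrature-summed) standard uncertainty."""
    combined = math.sqrt((rel_u_meas * measured) ** 2 +
                         (rel_u_calc * calculated) ** 2)
    return abs(measured - calculated) <= k * combined

# hypothetical absorbed doses (arbitrary units) for one source-target pair
consistent = agree(1.05, 1.00)   # small difference: within combined uncertainty
discrepant = agree(2.00, 1.00)   # factor-of-two gap: outside it
```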
Validity threats: overcoming interference with proposed interpretations of assessment data.
Downing, Steven M; Haladyna, Thomas M
2004-03-01
Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.
Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Vincent, Charles; van Gurp, Petra J; de Vet, Henrica C W; Wollersheim, Hub
2016-01-01
Objectives Record review is the most widely used method to quantify patient safety. We systematically reviewed the reliability and validity of adverse event detection with record review. Design A systematic review of the literature. Methods We searched PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Library from their inception through February 2015. We included all studies that aimed to describe the reliability and/or validity of record review. Two reviewers conducted data extraction. We pooled kappa (κ) values and analysed the differences in subgroups according to number of reviewers, reviewer experience and training level, adjusted for the prevalence of adverse events. Results In 25 studies, the psychometric data of the Global Trigger Tool (GTT) and the Harvard Medical Practice Study (HMPS) were reported and 24 studies were included for statistical pooling. The inter-rater reliability of the GTT and HMPS showed a pooled κ of 0.65 and 0.55, respectively. The inter-rater agreement was statistically significantly higher when the group of reviewers within a study consisted of a maximum of five reviewers. We found no studies reporting on the validity of the GTT and HMPS. Conclusions The reliability of record review is moderate to substantial and improved when a small group of reviewers carried out record review. The validity of the record review method has never been evaluated, while clinical data registries, autopsy or direct observations of patient care are potential reference methods that can be used to test concurrent validity. PMID:27550650
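The pooled statistic above is Cohen's κ, which for a single pair of reviewers is computed from a 2 × 2 agreement table; a minimal sketch with hypothetical counts (not data from the review):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 table: a = both reviewers flag an
    adverse event, d = both flag none, b and c = the disagreement cells."""
    n = a + b + c + d
    p_obs = (a + d) / n
    # chance agreement from the marginal proportions of each reviewer
    p_exp = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (p_obs - p_exp) / (1 - p_exp)

# hypothetical record review: 40 agreed events, 40 agreed non-events,
# 10 + 10 disagreements
kappa = cohens_kappa(40, 10, 10, 40)   # 0.6, comparable to the pooled values above
```

Pooling across studies, as done in the review, additionally weights each study's κ, typically by sample size or inverse variance.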
Dacombe, Peter Jonathan; Amirfeyz, Rouin; Davis, Tim
2016-03-01
Patient-reported outcome measures (PROMs) are important tools for assessing outcomes following injuries to the hand and wrist. Many commonly used PROMs have no evidence of reliability, validity, and responsiveness in a hand and wrist trauma population. This systematic review examines the PROMs used in the assessment of hand and wrist trauma patients, and the evidence for reliability, validity, and responsiveness of each measure in this population. A systematic review of Pubmed, Medline, and CINAHL searching for randomized controlled trials of patients with traumatic injuries to the hand and wrist was carried out to identify the PROMs. For each identified PROM, evidence of reliability, validity, and responsiveness was identified using a further systematic review of the Pubmed, Medline, CINAHL, and reverse citation trail audit procedure. The PROM used most often was the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire; the Patient-Rated Wrist Evaluation (PRWE), Gartland and Werley score, Michigan Hand Outcomes score, Mayo Wrist Score, and Short Form 36 were also commonly used. Only the DASH and PRWE have evidence of reliability, validity, and responsiveness in patients with traumatic injuries to the hand and wrist; other measures either have incomplete evidence or evidence gathered in a nontraumatic population. The DASH and PRWE both have evidence of reliability, validity, and responsiveness in a hand and wrist trauma population. Other PROMs used to assess hand and wrist trauma patients do not. This should be considered when selecting a PROM for patients with traumatic hand and wrist pathology.
Multidimensional measures validated for home health needs of older persons: A systematic review.
de Rossi Figueiredo, Daniela; Paes, Lucilene Gama; Warmling, Alessandra Martins; Erdmann, Alacoque Lorenzini; de Mello, Ana Lúcia Schaefer Ferreira
2018-01-01
To conduct a systematic review of the literature on valid and reliable multidimensional instruments to assess home health needs of older persons. Systematic review. Electronic databases, PubMed/Medline, Web of Science, Scopus, Cumulative Index to Nursing and Allied Health Literature, Scientific Electronic Library Online and the Latin American and Caribbean Health Sciences Information. All English, Portuguese and Spanish literature that included studies of reliability and validity of instruments that assessed at least two dimensions: physical, psychological, social support and functional independence, self-rated health behaviors and contextual environment and if such instruments proposed interventions after evaluation and/or monitoring changes over a period of time. Older persons aged 60 years or older. Of the 2397 studies identified, 32 were considered eligible. Two-thirds of the instruments proposed the physical, psychological, social support and functional independence dimensions. Inter-observer and intra-observer reliability and internal consistency values were 0.7 or above. More than two-thirds of the studies included validity (n=26) and more than one type of validity was tested in 15% (n=4) of these. Only 7% (n=2) proposed interventions after evaluation and/or monitoring changes over a period of time. Although the multidimensional assessment was performed, and the reliability values of the reviewed studies were satisfactory, different validity tests were not present in several studies. A gap in instrument design was observed relating to interventions after evaluation and/or monitoring changes over a period of time. Further studies with this purpose are needed to address the home health needs of older persons. Copyright © 2017 Elsevier Ltd. All rights reserved.
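The internal-consistency values cited above (the 0.7 criterion) conventionally refer to Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on hypothetical item scores, not data from the review:

```python
def cronbach_alpha(items):
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)),
    where items holds one list of scores per item, aligned across respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# hypothetical three-item instrument scored by four respondents
scores = [[3, 4, 5, 2], [2, 4, 5, 3], [3, 5, 4, 2]]
alpha = cronbach_alpha(scores)   # about 0.89, above the 0.7 criterion
```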
NASA Astrophysics Data System (ADS)
Wagner, David R.; Holmgren, Per; Skoglund, Nils; Broström, Markus
2018-06-01
The design and validation of a newly commissioned entrained flow reactor is described in the present paper. The reactor was designed for advanced studies of fuel conversion and ash formation in powder flames, and the capabilities of the reactor were experimentally validated using two different solid biomass fuels. The drop tube geometry was equipped with a flat flame burner to heat and support the powder flame, optical access ports, a particle image velocimetry (PIV) system for in situ conversion monitoring, and probes for extraction of gases and particulate matter. A detailed description of the system is provided based on simulations and measurements, establishing the detailed temperature distribution and gas flow profiles. Mass balance closures of approximately 98% were achieved by combining gas analysis and particle extraction. Biomass fuel particles were successfully tracked using shadow imaging PIV, and the resulting data were used to determine the size, shape, velocity, and residence time of converting particles. Successful extractive sampling of coarse and fine particles during combustion while retaining their morphology was demonstrated, opening up detailed time-resolved studies of rapid ash transformation reactions; in the validation experiments, clear and systematic fractionation trends for K, Cl, S, and Si were observed for the two fuels tested. The combination of in situ access, accurate residence time estimations, and precise particle sampling for subsequent chemical analysis allows for a wide range of future studies, with implications and possibilities discussed in the paper.
Drake, David; Kennedy, Rodney; Wallace, Eric
2017-12-01
Researchers and practitioners working in sports medicine and science require valid tests to determine the effectiveness of interventions and enhance understanding of mechanisms underpinning adaptation. Such decision making is influenced by the supportive evidence describing the validity of tests within current research. The objective of this study is to review the validity of lower body isometric multi-joint tests' ability to assess muscular strength and determine the current level of supporting evidence. Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were followed in a systematic fashion to search, assess and synthesize existing literature on this topic. Electronic databases such as Web of Science, CINAHL and PubMed were searched up to 18 March 2015. Potential inclusions were screened against eligibility criteria relating to types of test, measurement instrument, properties of validity assessed and population group and were required to be published in English. The Consensus-based Standards for the Selection of health Measurement Instruments (COSMIN) checklist was used to assess methodological quality and measurement property rating of included studies. Studies rated as fair or better in methodological quality were included in the best evidence synthesis. Fifty-nine studies met the eligibility criteria for quality appraisal. The ten studies that rated fair or better in methodological quality were included in the best evidence synthesis. The most frequently investigated lower body isometric multi-joint tests for validity were the isometric mid-thigh pull and isometric squat. The validity of each of these tests was strong in terms of reliability and construct validity. The evidence for responsiveness of tests was found to be moderate for the isometric squat test and unknown for the isometric mid-thigh pull. No tests using the isometric leg press met the criteria for inclusion in the best evidence synthesis.
Researchers and practitioners can use the isometric squat and isometric mid-thigh pull with confidence in terms of reliability and construct validity. Further work investigating other validity components, such as criterion validity, smallest detectable change and responsiveness to resistance exercise interventions, would strengthen the current level of evidence.
Rakotonarivo, O Sarobidy; Schaafsma, Marije; Hockley, Neal
2016-12-01
While discrete choice experiments (DCEs) are increasingly used in the field of environmental valuation, they remain controversial because of their hypothetical nature and the contested reliability and validity of their results. We systematically reviewed evidence on the validity and reliability of environmental DCEs from the past 13 years (January 2003-February 2016). In total, 107 articles met our inclusion criteria. These studies provide limited and mixed evidence of the reliability and validity of DCE. Valuation results were susceptible to small changes in survey design in 45% of outcomes reporting reliability measures. DCE results were generally consistent with those of other stated preference techniques (convergent validity), but hypothetical bias was common. Evidence supporting theoretical validity (consistency with assumptions of rational choice theory) was limited. In content validity tests, 2-90% of respondents protested against a feature of the survey, and a considerable proportion found DCEs to be incomprehensible or inconsequential (17-40% and 10-62% respectively). DCE remains useful for non-market valuation, but its results should be used with caution. Given the sparse and inconclusive evidence base, we recommend that tests of reliability and validity are more routinely integrated into DCE studies and suggest how this might be achieved. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Tack, Christopher; Shorthouse, Faye; Kass, Lindsy
2018-05-01
To evaluate the current literature via systematic review to ascertain whether amino acids/vitamins provide any influence on musculotendinous healing and if so, by which physiological mechanisms. EBSCO, PubMed, ScienceDirect, Embase Classic/Embase, and MEDLINE were searched using terms including "vitamins," "amino acids," "healing," "muscle," and "tendon." The primary search had 479 citations, of which 466 were excluded predominantly due to nonrandomized design. Randomized human and animal studies investigating all supplement types/forms of administration were included. Critical appraisal of internal validity was assessed using the Cochrane Risk of Bias Tool or the Systematic Review Centre for Laboratory Animal Experimentation Risk of Bias Tool for human and animal studies, respectively. Two reviewers performed dual data extraction. Twelve studies met criteria for inclusion: eight examined tendon healing and four examined muscle healing. All studies used animal models, except two human trials using a combined integrator. Narrative synthesis was performed via content analysis of demonstrated statistically significant effects and thematic analysis of proposed physiological mechanisms of intervention. Vitamin C/taurine demonstrated indirect effects on tendon healing through antioxidant activity. Vitamin A/glycine showed direct effects on extracellular matrix tissue synthesis. Vitamin E showed an antiproliferative influence on collagen deposition. Leucine directly influences signaling pathways to promote muscle protein synthesis. Preliminary evidence exists, demonstrating that vitamins and amino acids may facilitate multilevel changes in musculotendinous healing; however, recommendations on clinical utility should be made with caution. All animal studies and one human study showed high risk of bias with moderate interobserver agreement (k = 0.46). Currently, there is limited evidence to support the use of vitamins and amino acids for musculotendinous injury.
Before practical application, both high-quality animal experiments confirming the proposed physiological mechanisms of supplementation and human studies evaluating effects on tissue morphology and biochemistry are required.
Training Tools for Nontechnical Skills for Surgeons-A Systematic Review.
Wood, Thomas Charles; Raison, Nicholas; Haldar, Shreya; Brunckhorst, Oliver; McIlhenny, Craig; Dasgupta, Prokar; Ahmed, Kamran
Development of nontechnical skills for surgeons has been recognized as an important factor in surgical care. Training tools for this specific domain are being created and validated to maximize the surgeon's nontechnical ability. This systematic review aims to outline, address, and recommend these training tools. A full and comprehensive literature search, using a systematic format, was performed on ScienceDirect and PubMed, with data extraction occurring in line with specified inclusion criteria. Systematic review was performed fully at King's College London. A total of 84 heterogeneous articles were used in this review. Further, 23 training tools including scoring systems, training programs, and mixtures of the two for a range of specialities were identified in the literature. Most can be applied to surgery overall, although some tools target specific specialities (such as neurosurgery). Interrater reliability, construct, content, and face validation statuses were variable according to the specific tool in question. Study results pertaining to nontechnical skill training tools have thus far been universally positive, but further studies are required for those more recently developed and less extensively used tools. Recommendations can be made for individual training tools based on their level of validation and for their target audience. Based on the number of studies performed and their status of validity, NOTSS and Oxford NOTECHS II can be considered the gold standard for individual- and team-based nontechnical skills training, respectively, especially when used in conjunction with a training program. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
van der Sluis, Olaf; Vossen, Bart; Geers, Marc
2018-01-01
Metal-elastomer interfacial systems, often encountered in stretchable electronics, demonstrate remarkably high interface fracture toughness values. Evidently, a large gap exists between the rather small adhesion energy levels at the microscopic scale (‘intrinsic adhesion’) and the large measured macroscopic work-of-separation. This energy gap is closed here by unravelling the underlying dissipative mechanisms through a systematic numerical/experimental multi-scale approach. This self-containing contribution collects and reviews previously published results and addresses the remaining open questions by providing new and independent results obtained from an alternative experimental set-up. In particular, the experimental studies on Cu-PDMS (Poly(dimethylsiloxane)) samples conclusively reveal the essential role of fibrillation mechanisms at the micro-meter scale during the metal-elastomer delamination process. The micro-scale numerical analyses on single and multiple fibrils show that the dynamic release of the stored elastic energy by multiple fibril fracture, including the interaction with the adjacent deforming bulk PDMS and its highly nonlinear behaviour, provide a mechanistic understanding of the high work-of-separation. An experimentally validated quantitative relation between the macroscopic work-of-separation and peel front height is established from the simulation results. Finally, it is shown that a micro-mechanically motivated shape of the traction-separation law in cohesive zone models is essential to describe the delamination process in fibrillating metal-elastomer systems in a physically meaningful way. PMID:29393908
Methodological Issues in the Classification of Attention-Related Disorders.
ERIC Educational Resources Information Center
Fletcher, Jack M.; And Others
1991-01-01
For successful classification of children with attention deficit-hyperactivity disorder, major issues include (1) the need for explicit studies of identification criteria; (2) the need for systematic sampling strategies; (3) development of hypothetical classifications; and (4) systematic assessment of reliability and validity of hypothetical…
Network control principles predict neuron function in the Caenorhabditis elegans connectome
Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László
2017-01-01
Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode C. elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires twelve neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterised neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed, with single-cell ablations of DD04 or DD05, but not DD02 or DD03, specifically affecting posterior body movements. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterised connectomes. PMID:29045391
Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai
2014-01-01
High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of “transformed social interactions.” This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA. PMID:25360098
Sharma, Amitabh; Menche, Jörg; Huang, C. Chris; Ort, Tatiana; Zhou, Xiaobo; Kitsak, Maksim; Sahni, Nidhi; Thibault, Derek; Voung, Linh; Guo, Feng; Ghiassian, Susan Dina; Gulbahce, Natali; Baribaud, Frédéric; Tocker, Joel; Dobrin, Radu; Barnathan, Elliot; Liu, Hao; Panettieri, Reynold A.; Tantisira, Kelan G.; Qiu, Weiliang; Raby, Benjamin A.; Silverman, Edwin K.; Vidal, Marc; Weiss, Scott T.; Barabási, Albert-László
2015-01-01
Recent advances in genetics have spurred rapid progress towards the systematic identification of genes involved in complex diseases. Still, the detailed understanding of the molecular and physiological mechanisms through which these genes affect disease phenotypes remains a major challenge. Here, we identify the asthma disease module, i.e. the local neighborhood of the interactome whose perturbation is associated with asthma, and validate it for functional and pathophysiological relevance, using both computational and experimental approaches. We find that the asthma disease module is enriched with modest GWAS P-values against the background of random variation, and with differentially expressed genes from normal and asthmatic fibroblast cells treated with an asthma-specific drug. The asthma module also contains immune response mechanisms that are shared with other immune-related disease modules. Further, using diverse omics (genomics, gene-expression, drug response) data, we identify the GAB1 signaling pathway as an important novel modulator in asthma. The wiring diagram of the uncovered asthma module suggests a relatively close link between GAB1 and glucocorticoids (GCs), which we experimentally validate, observing an increase in the level of GAB1 after GC treatment in BEAS-2B bronchial epithelial cells. The siRNA knockdown of GAB1 in the BEAS-2B cell line resulted in a decrease in the NFkB level, suggesting a novel regulatory path of the pro-inflammatory factor NFkB by GAB1 in asthma. PMID:25586491
Comas, Jorge; Benfeitas, Rui; Vilaprinyo, Ester; Sorribas, Albert; Solsona, Francesc; Farré, Gemma; Berman, Judit; Zorrilla, Uxue; Capell, Teresa; Sandmann, Gerhard; Zhu, Changfu; Christou, Paul; Alves, Rui
2016-09-01
Plant synthetic biology is still in its infancy. However, synthetic biology approaches have been used to manipulate and improve the nutritional and health value of staple food crops such as rice, potato and maize. With current technologies, production yields of the synthetic nutrients are a result of trial and error, and systematic rational strategies to optimize those yields are still lacking. Here, we present a workflow that combines gene expression and quantitative metabolomics with mathematical modeling to identify strategies for increasing production yields of nutritionally important carotenoids in the seed endosperm synthesized through alternative biosynthetic pathways in synthetic lines of white maize, which is normally devoid of carotenoids. Quantitative metabolomics and gene expression data are used to create and fit parameters of mathematical models that are specific to four independent maize lines. Sensitivity analysis and simulation of each model is used to predict which gene activities should be further engineered in order to increase production yields for carotenoid accumulation in each line. Some of these predictions (e.g. increasing Zmlycb/Gllycb will increase accumulated β-carotenes) are valid across the four maize lines and consistent with experimental observations in other systems. Other predictions are line specific. The workflow is adaptable to any other biological system for which appropriate quantitative information is available. Furthermore, we validate some of the predictions using experimental data from additional synthetic maize lines for which no models were developed. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
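The sensitivity analysis step described above typically examines logarithmic gains: how strongly a steady-state output responds, in relative terms, to a relative change in a parameter. A minimal sketch on a toy Michaelis-Menten rate law standing in for one pathway step (not the maize carotenoid model; all values are illustrative):

```python
import math

def log_sensitivity(f, p, rel=1e-6):
    """Logarithmic gain d ln f / d ln p, estimated by a central
    finite difference in log space."""
    up, down = f(p * (1 + rel)), f(p * (1 - rel))
    return (math.log(up) - math.log(down)) / (math.log(1 + rel) - math.log(1 - rel))

# toy rate law: flux = Vmax * S / (Km + S), with substrate S fixed at 5.0
S = 5.0
sens_vmax = log_sensitivity(lambda vmax: vmax * S / (2.0 + S), 3.0)   # exactly 1
sens_km = log_sensitivity(lambda km: 3.0 * S / (km + S), 2.0)         # -Km/(Km+S)
```

Parameters with the largest absolute gains are the candidate targets for further engineering; in the study this role is played by the fitted line-specific gene activities.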
NASA Astrophysics Data System (ADS)
Takeda, M.; Nakajima, H.; Zhang, M.; Hiratsuka, T.
2008-04-01
To obtain reliable diffusion parameters for diffusion testing, multiple experiments should not only be cross-checked but the internal consistency of each experiment should also be verified. In the through- and in-diffusion tests with solution reservoirs, test interpretation of different phases often makes use of simplified analytical solutions. This study explores the feasibility of steady, quasi-steady, equilibrium and transient-state analyses using simplified analytical solutions with respect to (i) valid conditions for each analytical solution, (ii) potential error, and (iii) experimental time. For increased generality, a series of numerical analyses are performed using unified dimensionless parameters and the results are all related to dimensionless reservoir volume (DRV) which includes only the sorptive parameter as an unknown. This means the above factors can be investigated on the basis of the sorption properties of the testing material and/or tracer. The main findings are that steady, quasi-steady and equilibrium-state analyses are applicable when the tracer is not highly sorptive. However, quasi-steady and equilibrium-state analyses become inefficient or impractical compared to steady state analysis when the tracer is non-sorbing and material porosity is significantly low. Systematic and comprehensive reformulation of analytical models enables the comparison of experimental times between different test methods. The applicability and potential error of each test interpretation can also be studied. These can be applied in designing, performing, and interpreting diffusion experiments by deducing DRV from the available information for the target material and tracer, combined with the results of this study.
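The steady-state analysis referred to above conventionally fits the cumulative downstream mass, which after breakthrough follows Q(t) ≈ (A·De·C0/L)·(t − t_lag), so the effective diffusion coefficient De comes from the fitted slope. A minimal sketch under that standard model, using synthetic, exactly linear data and hypothetical cell dimensions:

```python
# steady-state through-diffusion: recover De from the slope of
# cumulative downstream mass Q(t) = (A * De * C0 / L) * (t - t_lag)
A, L, C0 = 1e-3, 5e-3, 10.0      # cell area m^2, sample thickness m, source conc g/m^3
De_true, t_lag = 2e-11, 8e4      # effective diffusivity m^2/s, time lag s

times = [2e5, 4e5, 6e5, 8e5]     # sampling times (s), all past breakthrough
Q = [A * De_true * C0 / L * (t - t_lag) for t in times]  # synthetic cumulative mass

# ordinary least-squares slope of Q versus t
n = len(times)
t_mean, q_mean = sum(times) / n, sum(Q) / n
slope = (sum((t - t_mean) * (q - q_mean) for t, q in zip(times, Q))
         / sum((t - t_mean) ** 2 for t in times))

De_est = slope * L / (A * C0)    # invert the steady-state relation
```

With real data the validity conditions discussed in the abstract apply: the fit is only meaningful once the flux is genuinely steady and reservoir depletion is negligible.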
Network control principles predict neuron function in the Caenorhabditis elegans connectome
NASA Astrophysics Data System (ADS)
Yan, Gang; Vértes, Petra E.; Towlson, Emma K.; Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László
2017-10-01
Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.
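The control framework referenced here builds on structural controllability, where the minimum set of driver nodes follows from a maximum matching of the directed network: every node whose "in" copy is left unmatched needs an independent control input. A minimal pure-Python sketch (not the authors' code; node labels are invented):

```python
def driver_nodes(nodes, edges):
    """Minimum driver-node set of a directed network via maximum bipartite
    matching (structural controllability). Nodes whose incoming side is
    unmatched must receive an independent control signal."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
    match_in = {}  # target node -> source node matched to it

    def augment(u, seen):
        # Augmenting-path search for an unmatched (or re-matchable) target.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_in or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    for u in sorted(nodes):
        augment(u, set())
    drivers = {v for v in nodes if v not in match_in}
    return drivers if drivers else {min(nodes)}  # fully matched: one driver suffices
```

For a chain 1→2→3 a single driver (node 1) controls the network, whereas a hub driving two leaves needs two drivers; applied to a connectome, the matching-based count is what singles out neuronal classes such as PDB.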
ERIC Educational Resources Information Center
Hawken, Leanne S.; Bundock, Kaitlin; Kladis, Kristin; O'Keeffe, Breda; Barret, Courtenay A.
2014-01-01
The purpose of this systematic literature review was to summarize outcomes of the Check-in Check-out (CICO) intervention across elementary and secondary settings. Twenty-eight studies utilizing both single subject and group (experimental and quasi-experimental) designs were included in this review. Median effect sizes across the eight group…
Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.
Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B
2018-01-01
The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.
Emotional intelligence in sport and exercise: A systematic review.
Laborde, S; Dosseville, F; Allen, M S
2016-08-01
This review targets emotional intelligence (EI) in sport and physical activity. We systematically review the available literature and offer a sound theoretical integration of differing EI perspectives (the tripartite model of EI) before considering applied practice in the form of EI training. Our review identified 36 studies assessing EI in an athletic or physical activity context. EI has most often been conceptualized as a trait. In the context of sport performance, we found that EI relates to emotions, physiological stress responses, successful psychological skill usage, and more successful athletic performance. In the context of physical activity, we found that trait EI relates to physical activity levels and positive attitudes toward physical activity. There was a shortage of research into the EI of coaches, officials, and spectators, non-adult samples, and longitudinal and experimental methods. The tripartite model proposes that EI operates on three levels - knowledge, ability, and trait - and predicts an interplay between the different levels of EI. We present this framework as a promising alternative to trait and ability EI conceptualizations that can guide applied research and professional practice. Further research into EI training, measurement validation and cultural diversity is recommended. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Multiscale Modeling of Hematologic Disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pivkin, Igor; Pan, Wenxiao
Parasitic infectious diseases and other hereditary hematologic disorders are often associated with major changes in the shape and viscoelastic properties of red blood cells (RBCs). Such changes can disrupt blood flow and even brain perfusion, as in the case of cerebral malaria. Modeling of these hematologic disorders requires a seamless multiscale approach, where blood cells and blood flow in the entire arterial tree are represented accurately using physiologically consistent parameters. In this chapter, we present a computational methodology based on dissipative particle dynamics (DPD) which models RBCs as well as whole blood in health and disease. DPD is a Lagrangian method that can be derived from systematic coarse-graining of molecular dynamics but can scale efficiently up to small arteries and can also be used to model RBCs down to spectrin level. To this end, we present two complementary mathematical models for RBCs and describe a systematic procedure on extracting the relevant input parameters from optical tweezers and microfluidic experiments for single RBCs. We then use these validated RBC models to predict the behavior of whole healthy blood and compare with experimental results. The same procedure is applied to modeling malaria, and results for infected single RBCs and whole blood are presented.
Campbell, Karen; Taylor, Vanessa; Douglas, Sheila
2017-12-12
Embedding online learning within higher education can provide engaging, cost-effective, interactive and flexible education. By evaluating the impact, outcomes and pedagogical influence of online cancer education, future curricula can be shaped and delivered by higher education providers to better meet learner, health care provider and educational commissioners' requirements for enhanced patient care and service delivery needs. Using Kirkpatrick's four-level model of educational evaluation, a systematic review of the effectiveness of online cancer education for nurses and allied health professionals was conducted. From 101 articles, 30 papers were included in the review. Educational theory is not always employed. There is an absence of longitudinal studies examining impact, an absence of reliability and/or validity testing of measures, limited use of experimental designs that account for power, and few attempts to mitigate bias. There is, however, an emerging innovative use of mobile/spaced learning techniques. Evidence for clinical and educational effectiveness is weak, offering insights into experiences and participant perceptions rather than concrete quantitative data and patient-reported outcomes. More pedagogical research is merited to inform effective evaluation of online cancer education that incorporates and demonstrates a longer-term impact.
Eye Tracking Outcomes in Tobacco Control Regulation and Communication: A Systematic Review.
Meernik, Clare; Jarman, Kristen; Wright, Sarah Towner; Klein, Elizabeth G; Goldstein, Adam O; Ranney, Leah
2016-10-01
In this paper we synthesize the evidence from eye tracking research in tobacco control to inform tobacco regulatory strategies and tobacco communication campaigns. We systematically searched 11 databases for studies that reported eye tracking outcomes in regards to tobacco regulation and communication. Two coders independently reviewed studies for inclusion and abstracted study characteristics and findings. Eighteen studies met full criteria for inclusion. Eye tracking studies on health warnings consistently showed these warnings often were ignored, though eye tracking demonstrated that novel warnings, graphic warnings, and plain packaging can increase attention toward warnings. Eye tracking also revealed that greater visual attention to warnings on advertisements and packages consistently was associated with cognitive processing as measured by warning recall. Eye tracking is a valid indicator of attention, cognitive processing, and memory. The use of this technology in tobacco control research complements existing methods in tobacco regulatory and communication science; it also can be used to examine the effects of health warnings and other tobacco product communications on consumer behavior in experimental settings prior to the implementation of novel health communication policies. However, the utility of eye tracking will be enhanced by the standardization of methodology and reporting metrics.
Modeling of acoustic emission signal propagation in waveguides.
Zelenyak, Andreea-Manuela; Hamstad, Marvin A; Sause, Markus G R
2015-05-21
Acoustic emission (AE) testing is a widely used nondestructive testing (NDT) method to investigate material failure. When environmental conditions are harmful to the operation of the sensors, waveguides are typically mounted between the inspected structure and the sensor. Such waveguides can be built from different materials or have different designs in accordance with the experimental needs. All these variations can cause changes in the acoustic emission signals in terms of modal conversion, additional attenuation or a shift in frequency content. A finite element method (FEM) was used to model acoustic emission signal propagation in an aluminum plate with an attached waveguide and was validated against experimental data. The geometry of the waveguide was systematically changed by varying the radius and height to investigate the influence on the detected signals. Different waveguide materials were implemented, and the change of material properties as a function of temperature was taken into account. The ability to model different waveguide options replaces the time-consuming and expensive trial-and-error alternative of experiments. Thus, this research has important implications for those who use waveguides for AE testing.
Baevskiĭ, R M; Bogomolov, V V; Funtova, I I; Slepchenkova, I N; Chernikova, A G
2009-01-01
Methods of investigating the physiological functions of space crews during night sleep on extended missions are of great fundamental and practical importance. The design of the "Sonocard" experiment utilizes the method of seismocardiography. The purpose of the experiment is to validate procedures for noncontact recording of physiological data during sleep, which could enhance the space crew medical support system. The experiment was performed systematically by ISS Russian crew members starting with mission 16. The experimental procedure is simple and causes no discomfort to the subjects. Results of the initial experimental sessions demonstrated that, as on Earth, sleep in microgravity is crucial for the recovery of the body's functional reserves, and that this technology is instrumental in studying recovery processes as well as each individual's patterns of adaptation to extended space missions. It also supports conclusions about sleep quality, mechanisms of recuperation, and body functionality. These data may substantially enrich the information available to medical operators at mission control centers.
Investigation of p-type depletion doping for InGaN/GaN-based light-emitting diodes
NASA Astrophysics Data System (ADS)
Zhang, Yiping; Zhang, Zi-Hui; Tan, Swee Tiam; Hernandez-Martinez, Pedro Ludwig; Zhu, Binbin; Lu, Shunpeng; Kang, Xue Jun; Sun, Xiao Wei; Demir, Hilmi Volkan
2017-01-01
Due to the limitation of hole injection, p-type doping is essential to improve the performance of InGaN/GaN multiple quantum well light-emitting diodes (LEDs). In this work, we propose and demonstrate a depletion-region Mg-doping method. Here we systematically analyze the effectiveness of different Mg-doping profiles ranging from the electron blocking layer to the active region. Numerical computations show that the Mg-doping decreases the valence band barrier for holes and thus enhances hole transport. The proposed depletion-region Mg-doping approach also increases the barrier height for electrons, which leads to reduced electron overflow, while increasing the hole concentration in the p-GaN layer. Experimentally measured external quantum efficiency indicates that the Mg-doping position is vitally important. Doping in or adjacent to the quantum well degrades the LED performance due to Mg diffusion, which increases the corresponding nonradiative recombination, as supported by the measured carrier lifetimes. The experimental results are well reproduced numerically by modifying the nonradiative recombination lifetimes, further validating the effectiveness of our approach.
PACS—Realization of an adaptive concept using pressure actuated cellular structures
NASA Astrophysics Data System (ADS)
Gramüller, B.; Boblenz, J.; Hühne, C.
2014-10-01
A biologically inspired concept is investigated which can be utilized to develop energy-efficient, lightweight and flexible adaptive structures. Building a real-life morphing unit is an ambitious task, as the numerous works in this field show. Summarizing fundamental demands and barriers regarding shape-changing structures, the basic challenges of designing morphing structures are listed. The concept of Pressure Actuated Cellular Structures (PACS) is placed within recent morphing activities, and it is shown that it complies with the underlying demands. Systematically divided into energy-related and structural subcomponents, the working principle is illuminated and relationships between basic design parameters are expressed. The analytical background describing the physical mechanisms of PACS is presented in concentrated form. This work focuses on the procedure of dimensioning, realizing and experimentally testing a single cell and a single-row cantilever made of PACS. The experimental outcomes as well as the results from the FEM computations are used to evaluate the analytical methods. The functionality of the basic principle is thus validated, and open issues are identified that point the way ahead.
Experimental validation of a self-calibrating cryogenic mass flowmeter
NASA Astrophysics Data System (ADS)
Janzen, A.; Boersch, M.; Burger, B.; Drache, J.; Ebersoldt, A.; Erni, P.; Feldbusch, F.; Oertig, D.; Grohmann, S.
2017-12-01
The Karlsruhe Institute of Technology (KIT) and the WEKA AG jointly develop a commercial flowmeter for application in helium cryostats. The flowmeter functions according to a new thermal measurement principle that eliminates all systematic uncertainties and enables self-calibration during real operation. Ideally, the resulting uncertainty of the measured flow rate is only dependent on signal noises, which are typically very small with regard to the measured value. Under real operating conditions, cryoplant-dependent flow rate fluctuations induce an additional uncertainty, which follows from the sensitivity of the method. This paper presents experimental results with helium at temperatures between 30 and 70 K and flow rates in the range of 4 to 12 g/s. The experiments were carried out in a control cryostat of the 2 kW helium refrigerator of the TOSKA test facility at KIT. Inside the cryostat, the new flowmeter was installed in series with a Venturi tube that was used for reference measurements. The measurement results demonstrate the self-calibration capability during real cryoplant operation. The influences of temperature and flow rate fluctuations on the self-calibration uncertainty are discussed.
Cavity-coupled double-quantum dot at finite bias: Analogy with lasers and beyond
NASA Astrophysics Data System (ADS)
Kulkarni, Manas; Cotlet, Ovidiu; Türeci, Hakan E.
2014-09-01
We present a theoretical and experimental study of photonic and electronic transport properties of a voltage biased InAs semiconductor double quantum dot (DQD) that is dipole coupled to a superconducting transmission line resonator. We obtain the master equation for the reduced density matrix of the coupled system of cavity photons and DQD electrons accounting systematically for both the presence of phonons and the effect of leads at finite voltage bias. We subsequently derive analytical expressions for transmission, phase response, photon number, and the nonequilibrium steady-state electron current. We show that the coupled system under finite bias realizes an unconventional version of a single-atom laser and analyze the spectrum and the statistics of the photon flux leaving the cavity. In the transmission mode, the system behaves as a saturable single-atom amplifier for the incoming photon flux. Finally, we show that the back action of the photon emission on the steady-state current can be substantial. Our analytical results are compared to exact master equation results establishing regimes of validity of various analytical models. We compare our findings to available experimental measurements.
Experimental determination of the correlation properties of plasma turbulence using 2D BES systems
NASA Astrophysics Data System (ADS)
Fox, M. F. J.; Field, A. R.; van Wyk, F.; Ghim, Y.-c.; Schekochihin, A. A.; the MAST Team
2017-04-01
A procedure is presented to map from the spatial correlation parameters of a turbulent density field (the radial and binormal correlation lengths and wavenumbers, and the fluctuation amplitude) to correlation parameters that would be measured by a beam emission spectroscopy (BES) diagnostic. The inverse mapping is also derived, which results in resolution criteria for recovering correct correlation parameters, depending on the spatial response of the instrument quantified in terms of point-spread functions (PSFs). Thus, a procedure is presented that allows for a systematic comparison between theoretical predictions and experimental observations. This procedure is illustrated using the Mega-Ampere Spherical Tokamak BES system and the validity of the underlying assumptions is tested on fluctuating density fields generated by direct numerical simulations using the gyrokinetic code GS2. The measurement of the correlation time, by means of the cross-correlation time-delay method, is also investigated and is shown to be sensitive to the fluctuating radial component of velocity, as well as to small variations in the spatial properties of the PSFs.
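The cross-correlation time-delay (CCTD) estimate mentioned at the end of the abstract can be sketched in a few lines: the delay is the lag that maximizes the cross-correlation of two fluctuating signals. Signal shapes and sampling below are invented for illustration.

```python
import numpy as np

def cctd_delay(sig_a, sig_b, dt):
    """Estimate the delay of sig_b relative to sig_a from the lag that
    maximizes their cross-correlation (positive result: sig_b arrives later)."""
    a = np.asarray(sig_a, float) - np.mean(sig_a)
    b = np.asarray(sig_b, float) - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")
    k = np.argmax(corr) - (len(b) - 1)  # lag of the correlation peak, in samples
    return -k * dt
```

In practice the peak location is refined by interpolation, and, as the abstract emphasizes, the result is sensitive to the fluctuating radial velocity component and to the PSFs of the instrument.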
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for the separation of a ternary mixture of glucose, lactic acid and acetic acid. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. First, the real 3F-ISMB model, coupled with the transport dispersive model (TDM) and the modified Langmuir isotherm, was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions. Copyright © 2018 Elsevier B.V. All rights reserved.
Fluid Dynamics and Thermodynamics of Vapor Phase Crystal Growth
NASA Technical Reports Server (NTRS)
Wiedemeier, H.
1985-01-01
The ground-based research effort under this program is concerned with systematic studies of the effects of variations: (1) of the relative importance of buoyancy-driven convection, and (2) of diffusion and viscosity conditions on crystal properties. These experimental studies are supported by thermodynamic characterizations of the systems, based on which fluid dynamic parameters can be determined. The specific materials under investigation include the GeSe-GeI4, Ge-GeI4, HgTe-HgI2, and Hg(1-x)Cd(x)Te-HgI2 systems. Mass transport rate studies of the GeSe-GeI4 system as a function of orientation of the density gradient relative to the gravity vector demonstrated the validity of flux anomalies observed in earlier space experiments. The investigation of the effects of inert gases on mass flux yielded the first experimental evidence for the existence of a boundary layer in closed ampoules. Combined with a thorough thermodynamic analysis, a transport model for diffusive flow including chemical vapor transport, sublimation, and Stefan flow was developed.
Coordinated photomorphogenic UV-B signaling network captured by mathematical modeling.
Ouyang, Xinhao; Huang, Xi; Jin, Xiao; Chen, Zheng; Yang, Panyu; Ge, Hao; Li, Shigui; Deng, Xing Wang
2014-08-05
Long-wavelength and low-fluence UV-B light is an informational signal known to induce photomorphogenic development in plants. Using the model plant Arabidopsis thaliana, a variety of factors involved in UV-B-specific signaling have been experimentally characterized over the past decade, including the UV-B light receptor UV resistance locus 8; the positive regulators constitutive photomorphogenesis 1 and elongated hypocotyl 5; and the negative regulators cullin4, repressor of UV-B photomorphogenesis 1 (RUP1), and RUP2. Individual genetic and molecular studies have revealed that these proteins function in either positive or negative regulatory capacities for the sufficient and balanced transduction of photomorphogenic UV-B signal. Less is known, however, regarding how these signaling events are systematically linked. In our study, we use a systems biology approach to investigate the dynamic behaviors and correlations of multiple signaling components involved in Arabidopsis UV-B-induced photomorphogenesis. We define a mathematical representation of photomorphogenic UV-B signaling at a temporal scale. Supplemented with experimental validation, our computational modeling demonstrates the functional interaction that occurs among different protein complexes in early and prolonged response to photomorphogenic UV-B.
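A minimal illustration of the kind of dynamic model described: a toy positive/negative feedback circuit in which the UV-B signal induces HY5 and HY5 in turn induces RUP, which attenuates the signal. All parameters and the two-variable reduction are invented for illustration; this is not the authors' fitted model of the full UVR8-COP1-HY5-RUP1/2 network.

```python
def simulate_uvb(uvb, T=200.0, dt=0.01):
    """Forward-Euler integration of a two-variable toy circuit:
    the UV-B signal (attenuated by RUP) induces HY5, and HY5 in turn
    induces RUP1/2 -- a negative feedback that balances the response.
    Illustrative parameters only."""
    hy5, rup = 0.0, 0.0
    for _ in range(int(T / dt)):
        signal = uvb / (1.0 + rup)      # RUP1/RUP2 repress the upstream signal
        hy5 += dt * (signal - 0.5 * hy5)
        rup += dt * (hy5 - 0.2 * rup)
    return hy5, rup
```

Under UV-B the circuit settles to an elevated but bounded HY5 level, while without UV-B it stays off, the "sufficient and balanced" transduction the abstract refers to.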
Coarse grained modeling of directed assembly to form functional nanoporous films
NASA Astrophysics Data System (ADS)
Al Khatib, Amir
A coarse-grained (CG) simulation of polyethylene glycol (PEG) and polymethylsilsesquioxane nanoparticles (PMSSQ, referred to as NP) at different sizes and concentrations was performed using the Martini CG force field. The interactions between CG PEG and CG NP were parameterized from the chemical composition of each molecule based on the Martini force field. The NP particles migrate to the substrate surface, in agreement with experimental observations at a high temperature of 800 K. This demonstrates that a nanoparticle-polymer film can be directed to self-assemble into a spatially systematic pattern using the substrate surface energy as the key gating parameter. The model was validated by comparing molecular dynamics simulations with experimental data collected in a previous study. At low interaction energies with the substrate, modeled with a Lennard-Jones potential, the NPs self-assembled into a hexagonal arrangement up to four layers above the substrate. This thesis establishes that substrate surface energy is a key gating parameter for directing the collective behavior of functional nanoparticles to form thin nanoporous films with spatially predetermined optical/dielectric constants.
Patient-Reported Outcome Measures for Hand and Wrist Trauma
Dacombe, Peter Jonathan; Amirfeyz, Rouin; Davis, Tim
2016-01-01
Background: Patient-reported outcome measures (PROMs) are important tools for assessing outcomes following injuries to the hand and wrist. Many commonly used PROMs have no evidence of reliability, validity, and responsiveness in a hand and wrist trauma population. This systematic review examines the PROMs used in the assessment of hand and wrist trauma patients, and the evidence for reliability, validity, and responsiveness of each measure in this population. Methods: A systematic review of Pubmed, Medline, and CINAHL searching for randomized controlled trials of patients with traumatic injuries to the hand and wrist was carried out to identify the PROMs. For each identified PROM, evidence of reliability, validity, and responsiveness was identified using a further systematic review of the Pubmed, Medline, CINAHL, and reverse citation trail audit procedure. Results: The PROM used most often was the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire; the Patient-Rated Wrist Evaluation (PRWE), Gartland and Werley score, Michigan Hand Outcomes score, Mayo Wrist Score, and Short Form 36 were also commonly used. Only the DASH and PRWE have evidence of reliability, validity, and responsiveness in patients with traumatic injuries to the hand and wrist; other measures either have incomplete evidence or evidence gathered in a nontraumatic population. Conclusions: The DASH and PRWE both have evidence of reliability, validity, and responsiveness in a hand and wrist trauma population. Other PROMs used to assess hand and wrist trauma patients do not. This should be considered when selecting a PROM for patients with traumatic hand and wrist pathology. PMID:27418884
Dreier, Maren; Borutta, Birgit; Stahmeyer, Jona; Krauth, Christian; Walter, Ulla
2010-06-14
Health care policy background: Findings from scientific studies form the basis for evidence-based health policy decisions. Quality assessments to evaluate the credibility of study results are an essential part of health technology assessment reports and systematic reviews. Quality assessment tools (QAT) for assessing study quality examine to what extent study results are systematically distorted by confounding or bias (internal validity). The tools can be divided into checklists, scales and component ratings. Which QAT are available to assess the quality of interventional studies or studies in the field of health economics, how do they differ from each other, and what conclusions can be drawn from these results for quality assessments? A systematic search of relevant databases from 1988 onwards is performed, supplemented by screening of the references, of the HTA reports of the German Agency for Health Technology Assessment (DAHTA), and an internet search. The selection of relevant literature, the data extraction and the quality assessment are carried out by two independent reviewers. The substantive elements of the QAT are extracted using a modified criteria list consisting of items and domains specific to randomized trials, observational studies, diagnostic studies, systematic reviews and health economic studies. Based on the number of covered items and domains, more and less comprehensive QAT are distinguished. In order to exchange experiences regarding problems in the practical application of tools, a workshop is hosted. A total of eight systematic methodological reviews are identified, as well as 147 QAT: 15 for systematic reviews, 80 for randomized trials, 30 for observational studies, 17 for diagnostic studies and 22 for health economic studies. The tools vary considerably with regard to content, performance and quality of operationalisation.
Some tools include not only items on internal validity but also items on quality of reporting and external validity. No tool covers all elements or domains. Design-specific generic tools are presented which cover most of the content criteria. The evaluation of QAT by content criteria is difficult, because there is no scientific consensus on the necessary elements of internal validity, and not all of the generally accepted elements are based on empirical evidence. Comparing QAT with regard to content neglects the operationalisation of the respective parameters, whose quality and precision are important for transparency, replicability, correct assessment and interrater reliability. QAT that mix items on quality of reporting and internal validity should be avoided. Different design-specific tools are available that can be preferred for quality assessment because of their wider coverage of the substantive elements of internal validity. To minimise the subjectivity of the assessment, tools with a detailed and precise operationalisation of the individual elements should be applied. For health economic studies, tools should be developed and complemented with instructions that define the appropriateness of the criteria. Further research is needed to identify study characteristics that influence the internal validity of studies.
Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025
NASA Astrophysics Data System (ADS)
Banegas, J. M.; Orué, M. W.
2016-07-01
Several documents deal with software validation. Nevertheless, most are too complex to apply to spreadsheets, surely the most widely used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be applied directly to validate spreadsheets. It includes a systematic way to document requirements, operational aspects of validation, and a simple method to keep records of validation results and modification history. The method is currently in use in an accredited calibration laboratory, where it has proved practical and efficient.
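The abstract does not specify the record-keeping mechanics; as a minimal sketch of the underlying idea, the snippet below validates a hypothetical spreadsheet uncertainty formula by recomputing reference cases independently and logging pass/fail results. The root-sum-of-squares formula, tolerance and case values are illustrative assumptions, not the paper's method.

```python
import math

def expected_combined_uncertainty(u_components):
    # Root-sum-of-squares combination, as a calibration sheet might use.
    return math.sqrt(sum(u * u for u in u_components))

def validate(cases, tolerance=1e-9):
    """Each case pairs the spreadsheet's displayed result with its
    inputs; record pass/fail for the validation log."""
    results = []
    for inputs, sheet_value in cases:
        ref = expected_combined_uncertainty(inputs)
        results.append(abs(ref - sheet_value) <= tolerance)
    return results

# Two reference cases: (component uncertainties, value read from the sheet).
# The second deliberately contains an error, so it should fail.
log = validate([([0.3, 0.4], 0.5), ([1.0, 1.0], 1.5)])
```

A failing entry in the log would trigger correction of the spreadsheet formula and a new entry in the modification history.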
de Boer, Pieter T; Frederix, Geert W J; Feenstra, Talitha L; Vemer, Pepijn
2016-09-01
Transparent reporting of validation efforts of health economic models gives stakeholders better insight into the credibility of model outcomes. In this study we reviewed recently published studies on seasonal influenza and early breast cancer in order to gain insight into the reporting of model validation efforts in the overall health economic literature. A literature search was performed in PubMed and Embase to retrieve health economic modelling studies published between 2008 and 2014. Reporting on model validation was evaluated by checking for the word validation, and by using AdViSHE (Assessment of the Validation Status of Health Economic decision models), a tool containing a structured list of relevant items for validation. Additionally, we contacted corresponding authors to ask whether more validation efforts were performed than those reported in the manuscripts. A total of 53 studies on seasonal influenza and 41 studies on early breast cancer were included in our review. The word validation was used in 16 studies (30%) on seasonal influenza and 23 studies (56%) on early breast cancer; however, in a minority of studies this referred to a model validation technique. Fifty-seven percent of seasonal influenza studies and 71% of early breast cancer studies reported one or more validation techniques. Cross-validation of study outcomes was found most often. A limited number of studies reported on model validation efforts, although good examples were identified. Author comments indicated that more validation techniques were performed than those reported in the manuscripts. Although validation is deemed important by many researchers, this is not reflected in the reporting habits of health economic modelling studies. Systematic reporting of validation efforts would be desirable to further enhance decision makers' confidence in health economic models and their outcomes.
Rosário, Susel; Fonseca, João A; Nienhaus, Albert; da Costa, José Torres
2016-01-01
Previous studies of psychosocial work factors have indicated their importance for workers' health. However, to what extent health problems can be attributed to the nature of the work environment or other psychosocial factors is not clear. No previous systematic review has used inclusion criteria based on specific medical evaluation of work-related health outcomes together with the use of validated instruments for the assessment of the psychosocial (work) environment. The aim of this systematic review is to summarize the evidence on the relationship between the psychosocial work environment and workers' health, based on studies that used standardized and validated instruments to assess the psychosocial work environment and that focused on medically confirmed health outcomes. A systematic review of the literature was carried out by searching the databases PubMed, B-ON, Science Direct, PsycARTICLES, Psychology and Behavioral Sciences Collection and the Google Scholar search engine, using appropriate search terms, for studies published from 2004 to 2014. This review follows the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Studies were included in the review if they presented data on validated psychosocial assessment method(s) for the study population and specific medical evaluation of work-related health outcome(s). In total, the search strategy yielded 10,623 references, of which 10 studies (seven prospective cohort and three cross-sectional) met the inclusion criteria. Most studies (7/10) observed an adverse effect of poor psychosocial work factors on workers' health: three on sickness absence and four on cardiovascular disease. The other three studies reported detrimental effects on sleep and on disease-associated biomarkers.
A more consistent effect was observed in studies of higher methodological quality that combined a prospective design with the use of validated instruments for the assessment of the psychosocial (work) environment and clinical evaluation. More prospective studies are needed to assess the evidence on the effect of work-related psychosocial factors on workers' health.
Lichtner, Valentina; Dowding, Dawn; Esterhuizen, Philip; Closs, S José; Long, Andrew F; Corbett, Anne; Briggs, Michelle
2014-12-17
There is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging, and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment. We searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach. We retrieved 441 potentially eligible reviews; 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied, and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small, providing limited evidence for use of any of the tools across settings or populations. There are a considerable number of pain assessment tools available for use with elderly, cognitively impaired populations. However, there is limited evidence about their reliability, validity and clinical utility.
On the basis of this review no one tool can be recommended given the existing evidence.
2014-01-01
Background Health impairments can result in disability and changed work productivity, imposing considerable costs on the employee, employer and society as a whole. A large number of instruments exist to measure health-related productivity changes; however, their methodological quality remains unclear. This systematic review critically appraised the measurement properties of generic self-reported instruments that measure health-related productivity changes, in order to recommend appropriate instruments for use in occupational and economic health practice. Methods PubMed, PsycINFO, Econlit and Embase were systematically searched for studies in which: (i) instruments measured health-related productivity changes; (ii) the aim was to evaluate instrument measurement properties; (iii) instruments were generic; (iv) ratings were self-reported; (v) full texts were available. Next, methodological quality appraisal was based on COSMIN elements: (i) internal consistency; (ii) reliability; (iii) measurement error; (iv) content validity; (v) structural validity; (vi) hypotheses testing; (vii) cross-cultural validity; (viii) criterion validity; and (ix) responsiveness. Recommendations are based on evidence syntheses. Results This review included 25 articles assessing the reliability, validity and responsiveness of 15 different generic self-reported instruments measuring health-related productivity changes. Most studies evaluated criterion validity, none evaluated cross-cultural validity, and information on measurement error is lacking. The Work Limitation Questionnaire (WLQ) was evaluated most frequently, with moderate and strong positive evidence for content and structural validity, respectively, and negative evidence for reliability, hypotheses testing and responsiveness. Less frequently evaluated, the Stanford Presenteeism Scale (SPS) showed strong positive evidence for internal consistency and structural validity, and moderate positive evidence for hypotheses testing and criterion validity.
The Productivity and Disease Questionnaire (PRODISQ) yielded strong positive evidence for content validity; evidence for its other properties is lacking. The other instruments received mostly fair-to-poor quality ratings with limited evidence. Conclusions Decisions based on the content of the instrument, usage purpose, target country and population, and available evidence are recommended. Until high-quality studies are in place to accurately assess the measurement properties of the currently available instruments, the WLQ and, in a Dutch context, the PRODISQ are cautiously preferred based on their strong positive evidence for content validity. Based on its strong positive evidence for internal consistency and structural validity, the SPS is also cautiously recommended. PMID:24495301
Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan
2015-09-01
Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched MEDLINE (PubMed interface), Web of Science, Science Direct, PsycINFO, and evidence-based medicine (the Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. We also manually studied the reference lists of all finally retrieved articles and carried out a search of their references. Inclusion criteria were: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were: studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In the primary search, we found 1237 articles across all databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. There were few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The identified ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, timing and other variables. They also varied in validity and reliability. Therefore, it is recommended that researchers and clinicians select ToM tests according to their psychometric characteristics, validity and reliability.
Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.
Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai
2018-03-09
Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize the performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December 2017 for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, assessed whether each study met the pre-defined inclusion criteria, and independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC, or c-statistic) and by calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed in elderly patients with moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of predictors varied from one to 11. Internal validation was performed in 12 models, with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation, with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validations of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation.
Further models should be developed with internal and external validation in low and middle income countries.
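The pooled AUC reported above comes from a random-effects meta-analysis. A minimal sketch of the standard DerSimonian-Laird inverse-variance approach is given below; the AUC values and standard errors are hypothetical, and the review's exact implementation may differ.

```python
import math

def dersimonian_laird(effects, ses):
    """Pool per-study effect estimates (e.g. AUCs) with a
    DerSimonian-Laird random-effects model."""
    w = [1.0 / se**2 for se in ses]            # fixed-effect weights
    fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (ei - fe)**2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights, pooled estimate and 95% CI
    wr = [1.0 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * ei for wi, ei in zip(wr, effects)) / sum(wr)
    se_pooled = math.sqrt(1.0 / sum(wr))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# hypothetical AUCs and standard errors for two external validations
auc, ci = dersimonian_laird([0.76, 0.80], [0.03, 0.04])
```

When between-study heterogeneity is negligible (tau^2 = 0), the estimate reduces to the fixed-effect inverse-variance average.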
Validation of asthma recording in electronic health records: protocol for a systematic review.
Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J
2017-05-29
Asthma is a common, heterogeneous disease with significant morbidity and mortality worldwide. It can be difficult to define in epidemiological studies using electronic health records, as the diagnosis is based on non-specific respiratory symptoms and spirometry, neither of which is routinely registered. Electronic health records can nonetheless be valuable for studying the epidemiology, management, healthcare use and control of asthma. For health databases to be useful sources of information, asthma diagnoses should ideally be validated. The primary objectives are to provide an overview of the methods used to validate asthma diagnoses in electronic health records and to summarise the results of the validation studies. EMBASE and MEDLINE will be systematically searched using appropriate search terms. The searches will cover all studies in these databases up to October 2016, with no start date, and will yield studies that have validated algorithms or codes for the diagnosis of asthma in electronic health records. At least one test validation measure (sensitivity, specificity, positive predictive value, negative predictive value or other) is necessary for inclusion. In addition, we require the validated algorithms to be compared with an external gold standard, such as a manual review, a questionnaire or an independent second database. We will summarise key data, including author, year of publication, country, time period, date, data source, population, case characteristics, clinical events, algorithms, gold standard and validation statistics, in a uniform table. This study is a synthesis of previously published studies and, therefore, no ethical approval is required. The results will be submitted to a peer-reviewed journal for publication. Results from this systematic review can be used in outcome research on asthma and to identify case definitions for asthma. CRD42016041798.
A Comprehensive Validation Methodology for Sparse Experimental Data
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Blattnig, Steve R.
2010-01-01
A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
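The abstract names a cumulative metric and a median-based metric without defining them, so the sketch below illustrates one plausible reading: per-point relative differences between model predictions and experimental cross sections, summarized by their median and by the tolerance needed to cover 95% of points. Function names and data are hypothetical, not the paper's definitions.

```python
def relative_uncertainty(model, experiment):
    # relative difference between a model prediction and a measurement
    return abs(model - experiment) / experiment

def median_uncertainty(pairs):
    """Median of the per-point relative uncertainties: a robust summary
    that is insensitive to a few badly predicted points."""
    u = sorted(relative_uncertainty(m, e) for m, e in pairs)
    n = len(u)
    return u[n // 2] if n % 2 else 0.5 * (u[n // 2 - 1] + u[n // 2])

def cumulative_uncertainty(pairs):
    """Tolerance within which 95% of the model predictions fall,
    read off the sorted per-point uncertainties."""
    u = sorted(relative_uncertainty(m, e) for m, e in pairs)
    return u[int(0.95 * (len(u) - 1))]

# hypothetical model-vs-experiment cross sections (same units)
pairs = [(102.0, 100.0), (47.0, 50.0), (9.0, 10.0), (200.0, 180.0)]
med = median_uncertainty(pairs)
cum95 = cumulative_uncertainty(pairs)
```

The median view suits tracking model development on subsets of parameter space, while the cumulative view addresses overall accuracy across the full database.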
2012-08-01
Experimental Characterization and Validation of Simultaneous Gust Alleviation and Energy Harvesting for Multifunctional Wing Spars (AFOSR report). [Recoverable fragments from the garbled record: Dryden PSD cloud wind and clear sky gust simulation with U0 = 15 m/s, Lv = 350 m; an energy control law based on limited energy constraints; experimental validation of simultaneous energy harvesting and vibration control.]
Ernst, D; Clerc, J; Decullier, E; Gavanier, G; Dupuis, O
2012-10-01
At birth, evaluation of neonatal well-being is crucial. It is therefore important to perform umbilical cord blood gas analysis and then to analyze the samples. We wanted to establish the feasibility and reliability of systematic umbilical cord blood sampling in a French labour ward. Systematic umbilical cord blood gas analysis was studied retrospectively in 1000 consecutive deliveries. We first established the feasibility of the samples, defined as the ratio of complete cord acid-base data to the number of deliveries of live newborns. We then established reliability on the remaining cord samples, defined as the ratio of samples fulfilling the quality criteria defined by Westgate et al. and revised by Kro et al. to the number of complete samples from live newborns. Finally, we looked for factors that might influence these results. The feasibility of systematic umbilical cord blood sampling reached 91.6%, and reliability reached 80.7%. Regarding the delivery mode, 38.6% of emergency caesareans (95% CI [30.8-46.3]; P<0.0001) led to non-valid samples, compared with only 11.3% of planned caesareans (95% CI [4.3-18.2]; P<0.0001). Umbilical cord blood analyses were thus significantly less often valid in emergency caesareans. Systematic cord blood gas analysis yielded 8.4% incomplete samples and 19.3% uninterpretable ones. Training sessions should be organized to improve feasibility and reliability, especially for emergency caesareans.
KOOHESTANI, HAMID REZA; SOLTANI ARABSHAHI, SEYED KAMRAN; FATA, LADAN; AHMADI, FAZLOLLAH
2018-01-01
Introduction: The demand for mobile learning in medical science educational programs is increasing. The present review gathers the evidence highlighted by experimental studies on the educational effects of mobile learning for medical science students. Methods: The study was carried out as a systematic search of the literature published from 2007 to July 2017 in the databases PubMed/Medline, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Knowledge (Thomson Reuters), Educational Resources and Information Center (ERIC), EMBASE (Elsevier), the Cochrane Library, PsycINFO and Google Scholar. To examine the quality of the articles, a tool validated by the BEME Review was employed. Results: In total, 21 papers were included. Three main themes emerged from the content of the papers: (1) improvement in students' clinical competency and confidence, (2) acquisition and enhancement of students' theoretical knowledge, and (3) students' positive attitudes to and perception of mobile learning. All papers assessed Level 2B of the Kirkpatrick hierarchy, and seven reported two or more outcome levels, but none reported Level 4. Conclusion: Our review showed that students of the medical sciences responded positively to mobile learning. Moreover, implementation of mobile learning in medical science programs may lead to valuable educational benefits and improve clinical competence and confidence along with theoretical knowledge, attitudes, and perception of mobile learning. The results indicated that a mobile learning strategy in medical education can positively affect learning in all three domains of Bloom's Taxonomy. PMID:29607333
Vertical Accuracy Evaluation of Aster GDEM2 Over a Mountainous Area Based on Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Guo, D.; Cui, T.
2018-05-01
Global digital elevation models (GDEM) provide elementary information on the heights of the Earth's surface and objects on the ground. GDEMs have become an important data source for a range of applications, and the vertical accuracy of a GDEM is critical for these applications. Nowadays UAVs are widely used for large-scale surveying and mapping. Compared with traditional surveying techniques, UAV photogrammetry is more convenient and more cost-effective, and it produces a DEM of the survey area with high accuracy and high spatial resolution. As a result, DEMs produced by UAV photogrammetry can be used for a more detailed and accurate evaluation of GDEM products. This study investigates the vertical accuracy (in terms of elevation accuracy and systematic errors) of the ASTER GDEM Version 2 dataset over complex terrain based on UAV photogrammetry. Experimental results show that the elevation errors of ASTER GDEM2 are normally distributed and the systematic error is quite small. The accuracy of the ASTER GDEM2 coincides well with that reported by the ASTER validation team. The accuracy in the research area is negatively correlated with both the slope of the terrain and the number of stereo observations. This study also evaluates the vertical accuracy of the up-sampled ASTER GDEM2. Experimental results show that the accuracy of the up-sampled ASTER GDEM2 data in the research area is not significantly reduced by the complexity of the terrain. This fine-grained accuracy evaluation of the ASTER GDEM2 is informative for GDEM-supported UAV photogrammetric applications.
Kramer, Christian; Fuchs, Julian E; Liedl, Klaus R
2015-03-23
Nonadditivity in protein-ligand affinity data represents highly instructive structure-activity relationship (SAR) features that indicate structural changes and have the potential to guide rational drug design. At the same time, nonadditivity is a challenge both for basic SAR analysis and for many ligand-based data analysis techniques, such as Free-Wilson analysis and Matched Molecular Pair analysis, since linear substituent contribution models inherently assume additivity and thus do not work in such cases. While structural causes for nonadditivity have been analyzed anecdotally, no systematic approaches to interpret and use nonadditivity prospectively have yet been developed. In this contribution, we lay the statistical framework for systematic analysis of nonadditivity in a SAR series. First, we develop a general metric to quantify nonadditivity. Then, we demonstrate the non-negligible impact of experimental uncertainty, which creates apparent nonadditivity, and introduce techniques to handle it. Finally, we analyze public SAR data sets for strong nonadditivity and use recourse to the original publications and available X-ray structures to find structural explanations for the nonadditivity observed. We find that all cases of strong nonadditivity (ΔΔpKi and ΔΔpIC50 > 2.0 log units) with sufficient structural information to generate reasonable hypotheses involve changes in binding mode. With the appropriate statistical basis, nonadditivity analysis opens a variety of new avenues in computer-aided drug design, including the validation of scoring functions and free energy perturbation approaches, binding pocket classification, and novel features in SAR analysis tools.
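For a 2x2 cycle of analogs differing at two substituent sites, nonadditivity is commonly quantified as a double difference of affinities; the general metric developed in the paper may differ, and the pKi values below are hypothetical.

```python
def nonadditivity(p_aa, p_ab, p_ba, p_bb):
    """Double-difference nonadditivity for a 2x2 substituent cycle.

    Compounds AA, AB, BA and BB differ by the same A->B change at each
    of two sites. Under strict additivity the effect of the change at
    site 2 is the same whether site 1 carries A or B, so the double
    difference is zero; large absolute values flag nonadditive SAR.
    """
    return (p_bb - p_ba) - (p_ab - p_aa)

# hypothetical pKi values for a four-compound cycle
delta = nonadditivity(p_aa=6.0, p_ab=6.5, p_ba=7.0, p_bb=9.8)
```

A value above 2.0 log units, as here, would mark the cycle as strongly nonadditive under the threshold used in the abstract, prompting a structural hypothesis such as a binding-mode change.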
ERIC Educational Resources Information Center
Harlen, Wynne
2005-01-01
This paper summarizes the findings of a systematic review of research on the reliability and validity of teachers' assessment used for summative purposes. In addition to the main question, the review also addressed the question "What conditions affect the reliability and validity of teachers' summative assessment?" The initial search for studies…
Nse, Odunaiya; Quinette, Louw; Okechukwu, Ogah
2015-09-01
Well-developed and validated lifestyle cardiovascular disease (CVD) risk factor questionnaires are key to obtaining accurate information for planning CVD prevention programs, a necessity in developing countries. We conducted this review to assess the methods and processes used for the development and content validation of lifestyle CVD risk factor questionnaires, and possibly to develop an evidence-based guideline for their development and content validation. Relevant databases at the Stellenbosch University library (PubMed, CINAHL, PsycINFO and ProQuest) were searched for studies conducted between 2008 and 2012, in English and among humans. Search terms used were CVD risk factors, questionnaires, smoking, alcohol, physical activity and diet. Methods identified for the development of lifestyle CVD risk factor questionnaires were: review of the literature, either systematic or traditional; involvement of experts and/or the target population through focus group discussions or interviews; the clinical experience of the authors; and the authors' deductive reasoning. For validation, the methods used were the involvement of an expert panel, the use of the target population, and factor analysis. Combining methods produces questionnaires with good content validity and other psychometric properties.
Religiosity and Substance Abuse: Need for Systematic Research
ERIC Educational Resources Information Center
Sharma, Manoj
2006-01-01
Religion plays a significant role in human life, yet its potential to influence health and health-related conditions is not well studied. This article cites several studies that examine the correlation between religiosity and substance abuse. This article also suggests that more systematic researches are needed to validate the correlation of…
Systematic, Cooperative Evaluation.
ERIC Educational Resources Information Center
Nassif, Paula M.
Evaluation procedures based on a systematic evaluation methodology, decision-maker validity, new measurement and design techniques, low cost, and a high level of cooperation on the part of the school staff were used in the assessment of a public school mathematics program for grades 3-8. The mathematics curriculum was organized into Spirals which…
2017-09-01
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions, by Matthew D. Bouwense. [Record garbled by report-documentation-page fields; only the title, author and unlimited-distribution statement are recoverable.]
Gathering Validity Evidence for Surgical Simulation: A Systematic Review.
Borgersen, Nanna Jo; Naur, Therese M H; Sørensen, Stine M D; Bjerrum, Flemming; Konge, Lars; Subhi, Yousif; Thomsen, Ann Sofia S
2018-06-01
To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled, from ∼30 studies/year in 2008 to 2010 to ∼70 to 90 studies/year in 2014 to 2016. Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.
Caro-Bautista, Jorge; Martín-Santos, Francisco Javier; Morales-Asencio, Jose Miguel
2014-06-01
To determine the psychometric properties and theoretical grounding of instruments that evaluate self-care behaviour or barriers in people with type 2 diabetes. There are many instruments designed to evaluate self-care behaviour or barriers in this population, but knowledge about their psychometric validation processes is lacking. Systematic review. We conducted a search for psychometric or validation studies published between January 1990-December 2012. We carried out searches in Pubmed, CINAHL, PsycINFO, ProQuolid, BibliPRO and Google SCHOLAR to identify instruments that evaluated self-care behaviours or barriers to diabetes self-care. We conducted a systematic review with the following inclusion criteria: Psychometric or clinimetric validation studies that included patients with type 2 diabetes (exclusively or partially) and which analysed self-care behaviour or barriers to self-care and proxies like self-efficacy or empowerment, from a multidimensional approach. Language: Spanish or English. Two authors independently assessed the quality of the studies and extracted data using Terwee's proposed criteria: psychometrics properties, dimensionality, theoretical ground and population used for validation through each included instrument. Sixteen instruments achieved the inclusion criteria for the review. We detected important methodological flaws in many of the selected instruments. Only the Self-management Profile for Type 2 Diabetes and Problem Areas in Diabetes Scale met half of Terwee's quality criteria. There are no instruments for identifying self-care behaviours or barriers elaborated with a strong validation process. Further research should be carried out to provide patients, clinicians and researchers with valid and reliable instruments that are methodologically solid and theoretically grounded. © 2013 John Wiley & Sons Ltd.
Smith, Shannon M; Paillard, Florence; McKeown, Andrew; Burke, Laurie B; Edwards, Robert R; Katz, Nathaniel P; Papadopoulos, Elektra J; Rappaport, Bob A; Slagle, Ashley; Strain, Eric C; Wasan, Ajay D; Turk, Dennis C; Dworkin, Robert H
2015-05-01
Measurement of inappropriate medication use events (eg, abuse or misuse) in clinical trials is important in characterizing a medication's abuse potential. However, no gold standard assessment of inappropriate use events in clinical trials has been identified. In this systematic review, we examine the measurement properties (ie, content validity, cross-sectional reliability and construct validity, longitudinal construct validity, ability to detect change, and responder definitions) of instruments assessing inappropriate use of opioid and nonopioid prescription medications to identify any that meet U.S. and European regulatory agencies' rigorous standards for outcome measures in clinical trials. Sixteen published instruments were identified, most of which were not designed for the selected concept of interest and context of use. For this reason, many instruments were found to lack adequate content validity (or documentation of content validity) to evaluate current inappropriate medication use events; for example, evaluating inappropriate use across the life span rather than current use, including items that did not directly assess inappropriate use (eg, questions about anger), or failing to capture information pertinent to inappropriate use events (eg, intention and route of administration). In addition, the psychometric data across all instruments were generally limited in scope. A further limitation is the heterogeneous, nonstandardized use of inappropriate medication use terminology. These observations suggest that available instruments are not well suited for assessing current inappropriate medication use within the specific context of clinical trials. Further effort is needed to develop reliable and valid instruments to measure current inappropriate medication use events in clinical trials. 
This systematic review evaluates the measurement properties of inappropriate medication use (eg, abuse or misuse) instruments to determine whether any meet regulatory standards for clinical trial outcome measures to assess abuse potential. Copyright © 2015 American Pain Society. All rights reserved.
Complex Water Impact: Validation and Qualification Sciences Experimental Complex
The Validation and Qualification Sciences Experimental Complex (VQSEC) at Sandia
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
Experimental Design and Some Threats to Experimental Validity: A Primer
ERIC Educational Resources Information Center
Skidmore, Susan
2008-01-01
Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…
NASA Astrophysics Data System (ADS)
Rao, Lang; Cai, Bo; Yu, Xiao-Lei; Guo, Shi-Shang; Liu, Wei; Zhao, Xing-Zhong
2015-05-01
3D microelectrodes are fabricated in one step into a microfluidic droplet separator by filling conductive silver paste into PDMS microchambers. The advantages of 3D silver paste electrodes in improving droplet sorting accuracy are systematically demonstrated by theoretical calculation, numerical simulation and experimental validation. The use of 3D electrodes also decreases the droplet sorting voltage, ensuring that cells encapsulated in droplets undergoing chip-based sorting remain at better metabolic status for further cellular assays. Finally, target droplets containing single cells are selectively sorted out from the others by an appropriate electric pulse. This method provides a simple and inexpensive alternative for fabricating 3D electrodes, and we expect our 3D electrode-integrated microfluidic droplet separator platform to find wide use in single-cell manipulation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, XF; Zhao, X; Huang, K
A high-fidelity two-dimensional axisymmetric multi-physics model is described in this paper as an effort to simulate the cycle performance of a recently discovered solid oxide metal-air redox battery (SOMARB). The model collectively considers mass transport, charge transfer and chemical redox cycle kinetics occurring across the components of the battery, and is validated by experimental data obtained from independent research. In particular, the redox kinetics at the energy storage unit is well represented by the Johnson-Mehl-Avrami-Kolmogorov (JMAK) and Shrinking Core models. The results explicitly show that the reduction of Fe3O4 during the charging cycle limits the overall performance. Distributions of electrode potential, overpotential, Nernst potential, and H2/H2O concentration across various components of the battery are also systematically investigated. (C) 2015 Elsevier B.V. All rights reserved.
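The JMAK model invoked above gives the transformed fraction of the storage material as a function of time. A minimal sketch of the standard JMAK expression; the rate constant and Avrami exponent below are illustrative placeholders, not values fitted to the SOMARB data:

```python
import math

def jmak_fraction(t, k, n):
    """Johnson-Mehl-Avrami-Kolmogorov (JMAK) transformed fraction:
    X(t) = 1 - exp(-k * t**n), with rate constant k and Avrami exponent n."""
    return 1.0 - math.exp(-k * t ** n)

# Illustrative parameters only (hypothetical, not from the paper):
k, n = 0.05, 2.0
profile = [jmak_fraction(t, k, n) for t in (0, 2, 5, 10)]
```

The sigmoidal X(t) curve rises from 0 toward 1, which is why a reduction step that follows slow JMAK kinetics can bottleneck the charging cycle.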
Lu, Liang-Xing; Wang, Ying-Min; Srinivasan, Bharathi Madurai; Asbahi, Mohamed; Yang, Joel K. W.; Zhang, Yong-Wei
2016-01-01
We perform systematic two-dimensional energetic analysis to study the stability of various nanostructures formed by dewetting solid films deposited on patterned substrates. Our analytical results show that by controlling system parameters such as the substrate surface pattern, film thickness and wetting angle, a variety of equilibrium nanostructures can be obtained. Phase diagrams are presented to show the complex relations between these system parameters and the various nanostructure morphologies. We further carry out both phase field simulations and dewetting experiments to validate the analytically derived phase diagrams. Good agreement between the results of our energetic analyses and those of our phase field simulations and experiments verifies our analysis. Hence, the phase diagrams presented here provide guidelines for using solid-state dewetting as a tool to achieve various nanostructures. PMID:27580943
NASA Astrophysics Data System (ADS)
Song, Maojiang; Yang, Fei; Liu, Liping; Su, Caixia
2018-02-01
Because of the important pharmaceutical activities of benzimidazole derivatives, the differences between 2-(2′-pyridyl)benzimidazole (2PBI) and 2-(4′-pyridyl)benzimidazole (4PBI) were investigated systematically by terahertz time-domain spectroscopy and density functional theory. Although the only difference between their molecular configurations is the arrangement of nitrogen on the pyridine ring, 2PBI and 4PBI differ markedly in their experimental absorption spectra in the range of 0.2-2.5 THz, such as in the number, amplitude and frequency position of absorption peaks. The validity of these results was confirmed by theoretical simulations using density functional theory. These differences likely originate from the different dihedral angles between the benzimidazole and pyridine rings and the different hydrogen-bonding interactions within the crystal cell.
NASA Technical Reports Server (NTRS)
Gupta, Vipul; Hochhalter, Jacob; Yamakov, Vesselin; Scott, Willard; Spear, Ashley; Smith, Stephen; Glaessgen, Edward
2013-01-01
A systematic study of crack tip interaction with grain boundaries is critical for improvement of multiscale modeling of microstructurally-sensitive fatigue crack propagation and for the computationally-assisted design of more durable materials. In this study, single, bi- and large-grain multi-crystal specimens of an aluminum-copper alloy are fabricated, characterized using electron backscattered diffraction (EBSD), and deformed under tensile loading and nano-indentation. 2D image correlation (IC) in an environmental scanning electron microscope (ESEM) is used to measure displacements near crack tips, grain boundaries and within grain interiors. The role of grain boundaries on slip transfer is examined using nano-indentation in combination with high-resolution EBSD. The use of detailed IC and EBSD-based experiments are discussed as they relate to crystal-plasticity finite element (CPFE) model calibration and validation.
Combinatorial Histone Acetylation Patterns Are Generated by Motif-Specific Reactions.
Blasi, Thomas; Feller, Christian; Feigelman, Justin; Hasenauer, Jan; Imhof, Axel; Theis, Fabian J; Becker, Peter B; Marr, Carsten
2016-01-27
Post-translational modifications (PTMs) are pivotal to cellular information processing, but how combinatorial PTM patterns ("motifs") are set remains elusive. We develop a computational framework, which we provide as open source code, to investigate the design principles generating the combinatorial acetylation patterns on histone H4 in Drosophila melanogaster. We find that models assuming purely unspecific or lysine site-specific acetylation rates were insufficient to explain the experimentally determined motif abundances. Rather, these abundances were best described by an ensemble of models with acetylation rates that were specific to motifs. The model ensemble converged upon four acetylation pathways; we validated three of these using independent data from a systematic enzyme depletion study. Our findings suggest that histone acetylation patterns originate through specific pathways involving motif-specific acetylation activity. Copyright © 2016 Elsevier Inc. All rights reserved.
Mapping the pathways of resistance to targeted therapies
Wood, Kris C.
2015-01-01
Resistance substantially limits the depth and duration of clinical responses to targeted anticancer therapies. Through the use of complementary experimental approaches, investigators have revealed that cancer cells can achieve resistance through adaptation or selection driven by specific genetic, epigenetic, or microenvironmental alterations. Ultimately, these diverse alterations often lead to the activation of signaling pathways that, when co-opted, enable cancer cells to survive drug treatments. Recently developed methods enable the direct and scalable identification of the signaling pathways capable of driving resistance in specific contexts. Using these methods, novel pathways of resistance to clinically approved drugs have been identified and validated. By combining systematic resistance pathway mapping methods with studies revealing biomarkers of specific resistance pathways and pharmacological approaches to block these pathways, it may be possible to rationally construct drug combinations that yield more penetrant and lasting responses in patients. PMID:26392071
Covariant Conservation Laws and the Spin Hall Effect in Dirac-Rashba Systems
NASA Astrophysics Data System (ADS)
Milletarı, Mirco; Offidani, Manuel; Ferreira, Aires; Raimondi, Roberto
2017-12-01
We present a theoretical analysis of two-dimensional Dirac-Rashba systems in the presence of disorder and external perturbations. We unveil a set of exact symmetry relations (Ward identities) that impose strong constraints on the spin dynamics of Dirac fermions subject to proximity-induced interactions. This allows us to demonstrate that an arbitrary dilute concentration of scalar impurities results in the total suppression of nonequilibrium spin Hall currents when only Rashba spin-orbit coupling is present. Remarkably, a finite spin Hall conductivity is restored when the minimal Dirac-Rashba model is supplemented with a spin-valley interaction. The Ward identities provide a systematic way to predict the emergence of the spin Hall effect in a wider class of Dirac-Rashba systems of experimental relevance and represent an important benchmark for testing the validity of numerical methodologies.
Toxcast and the Use of Human Relevant In Vitro Exposures ...
The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, development of increasingly relevant test systems, computational modeling to integrate experimental data, putting results in a dose and exposure context, characterizing uncertainty, and efficient validation of the test systems and computational models. The presentation will cover progress at the U.S. EPA in systematically addressing each of these challenges and delivering more human-relevant risk-based assessments. This abstract does not necessarily reflect U.S. EPA policy. Presentation at the British Toxicological Society Annual Congress on "ToxCast and the Use of Human Relevant In Vitro Exposures: Incorporating high-throughput exposure and toxicity testing data for 21st century risk assessments".
Design and validation of a neuroprosthesis for the treatment of upper limb tremor.
Gallego, J A; Rocon, E; Belda-Lois, J M; Koutsou, A D; Mena, S; Castillo, A; Pons, J L
2013-01-01
Pathological tremor is the most prevalent movement disorder. In spite of the existence of various treatments for it, tremor poses a functional problem to a large proportion of patients. This paper presents the design and implementation of a novel neuroprosthesis for tremor management. The paper starts by reviewing a series of design criteria that were established after analyzing users' needs and the expected functionality of the system. It then summarizes the design of the neuroprosthesis, which was built to meet the criteria defined previously. Experimental results with a representative group of 12 patients show that the neuroprosthesis provided significant (p < 0.001) and systematic tremor attenuation (on average 52.33 ± 25.48%), and encourage its functional evaluation as a potential new treatment for tremor in a large cohort of patients.
NASA Astrophysics Data System (ADS)
Petersen, D.; Naveed, P.; Ragheb, A.; Niedieker, D.; El-Mashtoly, S. F.; Brechmann, T.; Kötting, C.; Schmiegel, W. H.; Freier, E.; Pox, C.; Gerwert, K.
2017-06-01
Endoscopy plays a major role in the early recognition of cancers that are not externally accessible, and thereby in increasing the survival rate. Raman spectroscopic fiber-optical approaches can help to decrease the impact on the patient, increase objectivity in tissue characterization, reduce expenses and provide a significant time advantage in endoscopy. In gastroenterology, early recognition of malignant and precursor lesions is relevant. Instantaneous and precise differentiation between adenomas (as precursor lesions of cancer) and hyperplastic polyps on the one hand, and between high- and low-risk alterations on the other, is important. Raman fiber-optical measurements of colon biopsy samples taken during colonoscopy were carried out during a clinical study, and samples of adenocarcinoma (22), tubular adenomas (141), hyperplastic polyps (79) and normal tissue (101) from 151 patients were analyzed. This allows us to focus on the bioinformatic analysis and to set the stage for Raman endoscopic measurements. Since spectral differences between normal and cancerous biopsy samples are small, special care has to be taken in data analysis. Using a leave-one-patient-out cross-validation scheme, three different outlier identification methods were investigated to decrease the influence of systematic errors, such as a residual risk of misplacement of the sample and spectral dilution of marker bands (especially in cancerous tissue), and thereby optimize the experimental design. Furthermore, other validation schemes, namely leave-one-sample-out and leave-one-spectrum-out cross-validation, were compared with leave-one-patient-out cross-validation. High-risk lesions were differentiated from low-risk lesions with a sensitivity of 79%, specificity of 74% and an accuracy of 77%; cancer and normal tissue with a sensitivity of 79%, specificity of 83% and an accuracy of 81%. Additionally applied outlier identification enabled us to improve the recognition of neoplastic biopsy samples.
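The leave-one-patient-out scheme described above holds out all spectra of one patient per fold so that spectra from the same patient never appear in both training and test sets; sensitivity, specificity and accuracy then follow from the pooled confusion counts. A minimal, dependency-free sketch (the record tuple layout is a hypothetical illustration, not the study's data format):

```python
def leave_one_patient_out(records):
    """Yield (train, test) splits where all records of one patient are held out
    together. records: list of (patient_id, features, label) tuples."""
    patients = sorted({pid for pid, _, _ in records})
    for held_out in patients:
        train = [r for r in records if r[0] != held_out]
        test = [r for r in records if r[0] == held_out]
        yield train, test

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical records: (patient_id, feature vector, label)
records = [("p1", [0.1], 1), ("p1", [0.2], 1), ("p2", [0.3], 0), ("p3", [0.4], 0)]
splits = list(leave_one_patient_out(records))
```

For example, pooled counts of tp=79, fn=21, tn=83, fp=17 reproduce the abstract's cancer-versus-normal figures of 79% sensitivity, 83% specificity and 81% accuracy.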
Hounsome, J; Whittington, R; Brown, A; Greenhill, B; McGuire, J
2018-01-01
While structured professional judgement approaches to assessing and managing the risk of violence have been extensively examined in mental health/forensic settings, the application of the findings to people with an intellectual disability is less extensively researched and reviewed. This review aimed to assess whether risk assessment tools have adequate predictive validity for violence in adults with an intellectual disability. Standard systematic review methodology was used to identify and synthesize appropriate studies. A total of 14 studies were identified as meeting the inclusion criteria. These studies assessed the predictive validity of 18 different risk assessment tools, mainly in forensic settings. All studies concluded that the tools assessed were successful in predicting violence. Studies were generally of a high quality. There is good quality evidence that risk assessment tools are valid for people with intellectual disability who offend but further research is required to validate tools for use with people with intellectual disability who offend. © 2016 John Wiley & Sons Ltd.
Sjögren, P; Ordell, S; Halling, A
2003-12-01
The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed. Moreover, the type and strength of evidence were evaluated. Reporting of predetermined items relating to the methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located; 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate, according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.
Sensorimotor Incongruence in People with Musculoskeletal Pain: A Systematic Review.
Don, Sanneke; Voogt, Lennard; Meeus, Mira; De Kooning, Margot; Nijs, Jo
2017-01-01
Musculoskeletal pain has major public health implications, but the theoretical framework remains unclear. It is hypothesized that sensorimotor incongruence (SMI) might be a cause of long-lasting pain sensations in people with chronic musculoskeletal pain. Research data about experimental SMI triggering pain has been equivocal, making the relation between SMI and pain elusive. The aim of this study was to systematically review the studies on experimental SMI in people with musculoskeletal pain and healthy individuals. Preferred reporting items for systematic reviews and meta-analyses guidelines were followed. A systematic literature search was conducted using several databases until January 2015. To identify relevant articles, keywords regarding musculoskeletal pain or healthy subjects and the sensory or the motor system were combined. Study characteristics were extracted. Risk of bias was assessed using the Dutch Institute for Healthcare Improvement (CBO) checklist for randomized controlled trials, and level of evidence was judged. Eight cross-over studies met the inclusion criteria. The methodological quality of the studies varied, and populations were heterogeneous. In populations with musculoskeletal pain, outcomes of sensory disturbances and pain were higher during all experimental conditions compared to baseline conditions. In healthy subjects, pain reports during experimental SMI were very low or did not occur at all. Based on the current evidence and despite some methodological issues, there is no evidence that experimental SMI triggers pain in healthy individuals and in people with chronic musculoskeletal pain. However, people with chronic musculoskeletal pain report more sensory disturbances and pain during the experimental conditions, indicating that visual manipulation influences pain outcomes in this population. © 2016 World Institute of Pain.
SKA weak lensing - III. Added value of multiwavelength synergies for the mitigation of systematics
NASA Astrophysics Data System (ADS)
Camera, Stefano; Harrison, Ian; Bonaldi, Anna; Brown, Michael L.
2017-02-01
In this third paper of a series on radio weak lensing for cosmology with the Square Kilometre Array, we scrutinize synergies between cosmic shear measurements in the radio and optical/near-infrared (IR) bands for mitigating systematic effects. We focus on three main classes of systematics: (i) experimental systematic errors in the observed shear; (ii) signal contamination by intrinsic alignments; and (iii) systematic effects due to an incorrect modelling of non-linear scales. First, we show that a comprehensive, multiwavelength analysis provides a self-calibration method for experimental systematic effects, implying only a <50 per cent increment in the errors on cosmological parameters. We also illustrate how the cross-correlation between radio and optical/near-IR surveys alone is able to remove residual systematics with variance as large as 10^-5, i.e. of the same order of magnitude as the cosmological signal. This also opens the possibility of using such a cross-correlation as a means to detect unknown experimental systematics. Secondly, we demonstrate that, thanks to polarization information, radio weak lensing surveys will be able to mitigate contamination by intrinsic alignments, in a way similar but fully complementary to available self-calibration methods based on position-shear correlations. Lastly, we illustrate how radio weak lensing experiments, reaching higher redshifts than those accessible to optical surveys, will probe dark energy and the growth of cosmic structures in regimes less contaminated by non-linearities in the matter perturbations. For instance, the higher redshift bins of radio catalogues peak at z ≃ 0.8-1, whereas their optical/near-IR counterparts are limited to z ≲ 0.5-0.7. This translates into having a cosmological signal 2-5 times less contaminated by non-linear perturbations.
Systematic review found AMSTAR, but not R(evised)-AMSTAR, to have good measurement properties.
Pieper, Dawid; Buechter, Roland Brian; Li, Lun; Prediger, Barbara; Eikermann, Michaela
2015-05-01
To summarize all available evidence on measurement properties in terms of reliability, validity, and feasibility of the Assessment of Multiple Systematic Reviews (AMSTAR) tool, including R(evised)-AMSTAR. MEDLINE, EMBASE, PsycINFO, and CINAHL were searched for studies containing information on measurement properties of the tools in October 2013. We extracted data on study characteristics and measurement properties. These data were analyzed following measurement criteria. We included 13 studies, four of which were labeled as validation studies. Nine articles dealt with AMSTAR, two with R-AMSTAR, and one with both instruments. In terms of interrater reliability, most items showed substantial agreement (>0.6). The median intraclass correlation coefficient (ICC) for the overall score of AMSTAR was 0.83 (range 0.60-0.98), indicating high agreement. In terms of validity, ICCs were very high, with all but one ICC higher than 0.8 when the AMSTAR score was compared with scores from other tools. Scoring AMSTAR takes between 10 and 20 minutes. AMSTAR seems to be reliable and valid. Further investigations for systematic reviews of study designs other than randomized controlled trials are needed. R-AMSTAR should be further investigated, as evidence for its use is limited and its measurement properties have not been studied sufficiently. In general, test-retest reliability should be investigated in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.
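The ICCs summarized above are computed from a subjects-by-raters score matrix. A minimal sketch of one common form, the two-way random-effects, absolute-agreement, single-rater ICC(2,1) of Shrout and Fleiss; the review does not specify which ICC form each primary study used, so this is an illustrative assumption:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores: list of rows (subjects), each a list of ratings (one per rater)."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    # Two-way ANOVA decomposition of the total sum of squares.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement between two raters yields an ICC of 1.0, while a constant offset between raters (which absolute agreement penalizes) pulls the value below 1.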
Physical activity questionnaires for youth: a systematic review of measurement properties.
Chinapaw, Mai J M; Mokkink, Lidwine B; van Poppel, Mireille N M; van Mechelen, Willem; Terwee, Caroline B
2010-07-01
Because of the diversity in available questionnaires, it is not easy for researchers to decide which instrument is most suitable for their specific demands. Therefore, we systematically summarized and appraised studies examining measurement properties of self-administered and proxy-reported physical activity (PA) questionnaires in youth. Literature was identified by searching electronic databases (PubMed, EMBASE using 'EMBASE only' and SportDiscus) until May 2009. Studies were included if they reported on the measurement properties of self-administered and proxy-reported PA questionnaires in youth (mean age <18 years) and were published in the English language. The methodological quality and results of included studies were appraised using a standardized checklist (qualitative attributes and measurement properties of PA questionnaires [QAPAQ]). We included 54 manuscripts examining 61 versions of questionnaires. None of the included questionnaires showed both acceptable reliability and validity. Only seven questionnaires received a positive rating for reliability. Reported validity varied, with correlations between PA questionnaires and accelerometers ranging from very low to high (previous-day PA recall: correlation coefficient [r] = 0.77). In general, PA questionnaires for adolescents correlated better with accelerometer scores than did those for children. From this systematic review, we conclude that no questionnaires were available with both acceptable reliability and validity. Considerably more high-quality research is required to examine the validity and reliability of promising PA questionnaires for youth.
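The validity correlations cited above (e.g., r = 0.77 between questionnaire and accelerometer scores) are Pearson coefficients over paired observations. A minimal, dependency-free sketch of the statistic:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired observations x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applied to questionnaire scores versus accelerometer counts for the same participants, values near 1 indicate strong criterion validity and values near 0 indicate none.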
Taylor, Lisa; Poland, Fiona; Harrison, Peter; Stephenson, Richard
2011-01-01
To evaluate a systematic treatment programme developed by the researcher that targeted aspects of visual functioning affected by visual field deficits following stroke. The study design was a non-equivalent control (conventional) group pretest-posttest quasi-experimental feasibility design, using multisite data collection methods at specified stages. The study was undertaken within three acute hospital settings as outpatient follow-up sessions. Individuals who had visual field deficits three months post stroke were studied. The conventional group received routine occupational therapy, and the experimental group received, in addition, the systematic treatment programme. The treatment phase for both groups lasted six weeks. The Nottingham Adjustment Scale, a measure developed specifically for visual impairment, was used as the primary outcome measure. The change in Nottingham Adjustment Scale score was compared between the experimental (n = 7) and conventional (n = 8) treatment groups using the Wilcoxon signed ranks test. The result, Z = -2.028 (P = 0.043), showed a statistically significant difference in the change in Nottingham Adjustment Scale score between the two groups. The introduction of the systematic treatment programme resulted in a statistically significant change in the scores of the Nottingham Adjustment Scale.
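The Wilcoxon signed-ranks statistic used above ranks the absolute differences between paired scores and sums the ranks by sign. A minimal sketch of the W statistic on hypothetical change scores (the study's raw data are not reproduced here; the Z approximation and p-value are omitted for brevity):

```python
def wilcoxon_w(diffs):
    """Wilcoxon signed-rank statistic W: drop zero differences, rank |d| with
    average ranks for ties, return the smaller of the signed rank sums."""
    d = [x for x in diffs if x != 0]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(d):
        j = i
        # extend j over a run of tied absolute values
        while j + 1 < len(d) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg_rank
        i = j + 1
    w_pos = sum(r for r, x in zip(ranks, d) if x > 0)
    w_neg = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_pos, w_neg)
```

When every difference shares the same sign, W is 0, the most extreme value the statistic can take.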
Code of Federal Regulations, 2011 CFR
2011-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...
Code of Federal Regulations, 2010 CFR
2010-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...
ERIC Educational Resources Information Center
Fromm, Germán; Hallinger, Philip; Volante, Paulo; Wang, Wen Chung
2017-01-01
The purposes of this study were to report on a systematic approach to validating a Spanish version of the Principal Instructional Management Rating Scale and then to apply the scale in a cross-national comparison of principal instructional leadership. The study yielded a validated Spanish language version of the PIMRS Teacher Form and offers a…
Interventions aimed at improving the nursing work environment: a systematic review
2010-01-01
Background Nursing work environments (NWEs) in Canada and other Western countries have increasingly received attention following years of restructuring and reported high workloads, high absenteeism, and shortages of nursing staff. Despite numerous efforts to improve NWEs, little is known about the effectiveness of interventions to improve NWEs. The aim of this study was to review systematically the scientific literature on implemented interventions aimed at improving the NWE and their effectiveness. Methods An online search of the databases CINAHL, Medline, Scopus, ABI, Academic Search Complete, HEALTHstar, ERIC, Psychinfo, and Embase, and a manual search of Emerald and Longwoods was conducted. (Quasi-) experimental studies with pre/post measures of interventions aimed at improving the NWE, study populations of nurses, and quantitative outcome measures of the nursing work environment were required for inclusion. Each study was assessed for methodological strength using a quality assessment and validity tool for intervention studies. A taxonomy of NWE characteristics was developed that would allow us to identify on which part of the NWE an intervention targeted for improvement, after which the effects of the interventions were examined. Results Over 9,000 titles and abstracts were screened. Eleven controlled intervention studies met the inclusion criteria, of which eight used a quasi-experimental design and three an experimental design. In total, nine different interventions were reported in the included studies. The most effective interventions at improving the NWE were: primary nursing (two studies), the educational toolbox (one study), the individualized care and clinical supervision (one study), and the violence prevention intervention (one study). Conclusions Little is known about the effectiveness of interventions aimed at improving the NWE, and published studies on this topic show weaknesses in their design. 
To advance the field, we recommend that investigators use controlled studies with pre/post measures to evaluate interventions that are aimed at improving the NWE. Thereby, more evidence-based knowledge about the implementation of interventions will become available for healthcare leaders to use in rebuilding nursing work environments. PMID:20423492
Hazell, Lorna; Raschi, Emanuel; De Ponti, Fabrizio; Thomas, Simon H L; Salvo, Francesco; Ahlberg Helgee, Ernst; Boyer, Scott; Sturkenboom, Miriam; Shakir, Saad
2017-05-01
A systematic review was performed to categorize the hERG (human ether-a-go-go-related gene) liability of antihistamines, antipsychotics, and anti-infectives and to compare it with current clinical risk of torsade de pointes (TdP). Eligible studies were hERG assays reporting half-maximal inhibitory concentrations (IC50). A "hERG safety margin" was calculated from the IC50 divided by the peak human plasma concentration (free Cmax). A margin below 30 defined hERG liability. Each drug was assigned an "uncertainty score" based on the volume, consistency, precision, and internal and external validity of the evidence. The hERG liability was compared to existing knowledge on TdP risk (www.credibledrugs.org). Of 1828 studies, 82 were eligible, allowing calculation of safety margins for 61 drugs. Thirty-one drugs (51%) had evidence of hERG liability, including 6 with no previous mention of TdP risk (eg, desloratadine, lopinavir). Conversely, 16 drugs (26%) had no evidence of hERG liability, including 6 with known, or at least conditional or possible, TdP risk (eg, chlorpromazine, sulpiride). The main sources of uncertainty were the validity of the experimental conditions used (antihistamines and antipsychotics) and nonuse of reference compounds (anti-infectives). In summary, hERG liability was categorized for 3 widely used drug classes, incorporating a qualitative assessment of the strength of available evidence. Some concordance with TdP risk was observed, although several drugs had hERG liability without evidence of clinical risk and vice versa. This may be due to gaps in clinical evidence, limitations of hERG/Cmax data, or other patient/drug-specific factors that contribute to real-life TdP risk. © 2016, The American College of Clinical Pharmacology.
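The margin rule described in the abstract is simple enough to sketch directly: divide the hERG IC50 by the free Cmax and flag any margin below 30 as a liability. A minimal Python sketch; the drug concentrations used in the example are illustrative placeholders, not values from the review.

```python
def herg_safety_margin(ic50_um, free_cmax_um):
    """Safety margin = IC50 / free Cmax (a dimensionless concentration ratio)."""
    return ic50_um / free_cmax_um

def has_herg_liability(ic50_um, free_cmax_um, threshold=30.0):
    """A margin below the threshold (30 in the review) indicates hERG liability."""
    return herg_safety_margin(ic50_um, free_cmax_um) < threshold

# Hypothetical drug: IC50 = 1.5 uM, free Cmax = 0.1 uM -> margin = 15 -> liable
print(herg_safety_margin(1.5, 0.1))   # 15.0
print(has_herg_liability(1.5, 0.1))   # True
print(has_herg_liability(60.0, 0.1))  # margin = 600 -> False
```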
Craike, M; Hill, B; Gaskin, C J; Skouteris, H
2017-03-01
Physical activity (PA) during pregnancy has significant health benefits for the mother and her child; however, many women reduce their activity levels during pregnancy and most are not sufficiently active. Given the important health benefits of PA during pregnancy, evidence that supports research translation is vital. The aim was to determine the extent to which physical activity interventions for pregnant women report on internal and external validity factors using the RE-AIM framework (reach, efficacy/effectiveness, adoption, implementation, and maintenance). Ten databases were searched up to 1 June 2015. Eligible published papers and unpublished/grey literature were identified using relevant search terms. Studies had to report on physical activity interventions during pregnancy, including measures of physical activity at baseline and at least one point post-intervention. Randomised controlled trials and quasi-experimental studies that had a comparator group were included. Reporting of RE-AIM dimensions was summarised and synthesised across studies. The reach (72.1%) and efficacy/effectiveness (71.8%) dimensions were commonly reported; however, the implementation (28.9%) and adoption (23.2%) dimensions were less commonly reported, and no studies reported on maintenance. This review highlights the under-reporting of contextual factors in studies of physical activity during pregnancy. The translation of physical activity interventions during pregnancy could be improved through reporting of the representativeness of participants, clearer reporting of outcomes, more detail on the setting and staff who deliver interventions, costing of interventions, and the inclusion of process evaluations and qualitative data. © 2016 Royal College of Obstetricians and Gynaecologists.
Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model
2010-03-01
Thesis by John Haiducek, 1st Lt, USAF (AFIT/GAP/ENP/10-M07), Air Force Institute of Technology, March 2010; approved for public release, distribution unlimited. Abstract: The High Energy Laser End-to-End
DiFranza, Joseph; Ursprung, W W Sanouri; Lauzon, Béatrice; Bancej, Christina; Wellman, Robert J; Ziedonis, Douglas; Kim, Sun S; Gervais, André; Meltzer, Bruce; McKay, Colleen E; O'Loughlin, Jennifer; Okoli, Chizimuzo T C; Fortuna, Lisa R; Tremblay, Michèle
2010-05-01
The Diagnostic and Statistical Manual diagnostic criteria for nicotine dependence (DSM-ND) are based on the proposition that dependence is a syndrome that can be diagnosed only when a minimum of 3 of the 7 prescribed features are present. The DSM-ND criteria are an accepted research measure, but their validity has not been subjected to a systematic evaluation. To systematically review evidence of validity and reliability for the DSM-ND criteria, a literature search was conducted across 16 national and international databases. Each article with original data was independently reviewed by two or more reviewers. In total, 380 potentially relevant articles were examined and 169 were reviewed in depth. The DSM-ND criteria have seen wide use in research settings, but their sensitivity and specificity are well below the accepted standards for clinical applications. Predictive validity is generally poor. The 7 DSM-ND criteria are regarded as having face validity, but no data support a 3-symptom ND diagnostic threshold or a 4-symptom withdrawal syndrome threshold. The DSM incorrectly states that daily smoking is a prerequisite for withdrawal symptoms. The DSM shows poor to modest concurrence with all other measures of nicotine dependence, smoking behaviors, and biological measures of tobacco use. The data support the DSM-ND criteria as a valid measure of nicotine dependence severity for research applications. However, the data do not support the central premise of a 3-symptom diagnostic threshold, and no data establish that the DSM-ND criteria provide an accurate diagnosis of nicotine dependence. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Rosneck, James S; Hughes, Joel; Gunstad, John; Josephson, Richard; Noe, Donald A; Waechter, Donna
2014-01-01
This article describes the systematic construction and psychometric analysis of a knowledge assessment instrument for phase II cardiac rehabilitation (CR) patients, measuring risk modification and disease management knowledge and behavioral outcomes derived from national standards relevant to secondary prevention and management of cardiovascular disease. First, using an adult curriculum based on disease-specific learning outcomes and competencies, a systematic test item development process was completed by clinical staff. Second, a panel of educational and clinical experts used an iterative process to identify the test content domain and arrive at consensus in selecting items meeting criteria. Third, the resulting 31-question instrument, the Cardiac Knowledge Assessment Tool (CKAT), was piloted with CR patients to assess its applicability. Validity and reliability analyses were performed on pretest administrations from 3638 adults, with additional focused analyses of 1999 individuals completing both pretreatment and posttreatment administrations within 6 months. Evidence of CKAT content validity was substantiated, with 85% agreement among content experts. Evidence of construct validity was demonstrated via factor analysis identifying key underlying factors. Estimates of internal consistency, for example, Cronbach's α = .852 and Spearman-Brown split-half reliability = 0.817 on pretesting, support test reliability. Item analysis, using point biserial correlation, measured relationships between performance on single items and total score (P < .01). Analyses using item difficulty and item discrimination indices further verified item stability and validity of the CKAT. A knowledge instrument specifically designed for an adult CR population was systematically developed and tested in a large representative patient population, satisfying psychometric parameters, including validity and reliability.
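The internal-consistency statistics quoted for the CKAT (Cronbach's α and Spearman-Brown split-half reliability) can both be computed from an item-by-respondent score matrix. A minimal Python sketch; the 4-item, 6-respondent binary score matrix below is invented for illustration and is unrelated to the CKAT data.

```python
import math
from statistics import pvariance

def cronbach_alpha(items):
    """items: one score list per item, aligned across respondents."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - sum(pvariance(it) for it in items)
                            / pvariance(totals))

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def spearman_brown_split_half(items):
    """Correlate odd-item and even-item half scores, then step up."""
    odd = [sum(col) for col in zip(*items[0::2])]
    even = [sum(col) for col in zip(*items[1::2])]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

scores = [  # 4 items x 6 respondents, 0/1 scoring (invented data)
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 1],
]
print(round(cronbach_alpha(scores), 3))
print(round(spearman_brown_split_half(scores), 3))
```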
Validating Performance Level Descriptors (PLDs) for the AP® Environmental Science Exam
ERIC Educational Resources Information Center
Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen
2012-01-01
This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
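The paper's core idea, validating the model over each observation segment and letting only reliable segments drive Bayesian updating, can be sketched on a toy problem. Everything below (the linear stand-in model, noise level, nominal parameter of 1.9, and the 0.2 validation threshold) is an assumption for illustration, not the rotorcraft-hub fatigue model of the paper.

```python
import random

random.seed(0)

def model(theta, x):
    """Simple stand-in for the physics model (assumed linear here)."""
    return theta * x

true_theta, sigma, nominal = 2.0, 0.2, 1.9
xs = [0.1 * i for i in range(1, 31)]
obs = [model(true_theta, x) + random.gauss(0.0, sigma) for x in xs]

grid = [1.0 + 0.01 * i for i in range(201)]  # candidate parameter values
log_post = [0.0] * len(grid)                 # flat prior (log scale)

# Discretize the observation domain into segments; validate each segment
# against the nominal (pre-calibration) model before using it for updating.
for seg in (range(0, 10), range(10, 20), range(20, 30)):
    rel_err = sum(abs(obs[i] - model(nominal, xs[i])) / max(abs(obs[i]), 1e-9)
                  for i in seg) / len(seg)
    if rel_err > 0.2:
        continue                             # unreliable segment: skip update
    for j, th in enumerate(grid):            # Gaussian-likelihood Bayes update
        log_post[j] += sum(-0.5 * ((obs[i] - model(th, xs[i])) / sigma) ** 2
                           for i in seg)

best = grid[max(range(len(grid)), key=log_post.__getitem__)]
print(round(best, 2))  # posterior mode should land near true_theta = 2.0
```

Here the noisiest, least reliable segment (small x, where relative error blows up) tends to be skipped, while the remaining segments still pin the posterior mode near the true parameter.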
Bakire, Serge; Yang, Xinya; Ma, Guangcai; Wei, Xiaoxuan; Yu, Haiying; Chen, Jianrong; Lin, Hongjun
2018-01-01
Organic chemicals in the aquatic ecosystem may inhibit algae growth and subsequently lead to a decline in primary productivity. Growth inhibition tests are required for ecotoxicological assessments for regulatory purposes. In silico studies play an important role in replacing or reducing animal tests and decreasing experimental expense. In this work, a series of theoretical models was developed for predicting algal growth inhibition (log EC50) after 72 h of exposure to diverse chemicals. In total, 348 organic compounds were classified into five modes of toxic action using the Verhaar scheme. Each model was established using molecular descriptors that characterize electronic and structural properties. External validation and leave-one-out cross-validation proved the statistical robustness of the derived models. Thus, they can be used to predict log EC50 values for chemicals that lack authorized algal growth inhibition values (72 h). This work systematically studied algal growth inhibition according to toxic modes, and the developed model suite covers all five toxic modes. The outcome of this research will promote toxic-mechanism analysis and is applicable to structurally diverse chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Qin, Z.; Zhao, J. M.; Liu, L. H.
2018-05-01
The level energies of diatomic molecules calculated by the frequently used Dunham expansion will become less accurate for high-lying vibrational and rotational levels. In this paper, the potential curves for the lower-lying electronic states with accurate spectroscopic constants are reconstructed using the Rydberg-Klein-Rees (RKR) method, which are extrapolated to the dissociation limits by fitting of the theoretical potentials, and the rest of the potential curves are obtained from the ab-initio results in the literature. Solving the rotational dependence of the radial Schrödinger equation over the obtained potential curves, we determine the rovibrational level energies, which are then used to calculate the equilibrium and non-equilibrium thermodynamic properties of N2, N2+, NO, O2, CN, C2, CO and CO+. The partition functions and the specific heats are systematically validated by available data in the literature. Finally, we calculate the radiative source strengths of diatomic molecules in thermodynamic equilibrium, which agree well with the available values in the literature. The spectral radiative intensities for some diatomic molecules in thermodynamic non-equilibrium are calculated and validated by available experimental data.
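The step from rovibrational level energies to thermodynamic properties rests on the internal partition function Q(T) = Σ over levels of (2J+1)·exp(−E(v,J)/kT). A toy Python sketch using a harmonic-oscillator/rigid-rotor stand-in with N2-like spectroscopic constants, not the RKR/ab-initio potentials of the paper, and ignoring anharmonicity, electronic degeneracy, and nuclear-spin symmetry factors.

```python
import math

K_B = 0.6950348          # Boltzmann constant in cm^-1 per K
WE, BE = 2358.57, 1.998  # N2-like vibrational/rotational constants (cm^-1)

def level_energy(v, J):
    """Harmonic-oscillator + rigid-rotor energy above the (v=0, J=0) level."""
    return WE * v + BE * J * (J + 1)

def partition_function(T, vmax=40, Jmax=200):
    """Q(T) = sum over (v, J) levels of (2J+1) * exp(-E / kT)."""
    return sum((2 * J + 1) * math.exp(-level_energy(v, J) / (K_B * T))
               for v in range(vmax + 1) for J in range(Jmax + 1))

# At 300 K the rotational factor dominates, so Q is close to kT/Be (~104)
print(round(partition_function(300.0), 1))
```

Specific heats then follow from temperature derivatives of ln Q, which is where the accuracy of the high-lying levels, the paper's focus, matters most.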
User-Assisted Store Recycling for Dynamic Task Graph Schedulers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan
The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because the recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
Penocchio, Emanuele; Piccardo, Matteo; Barone, Vincenzo
2015-10-13
The B2PLYP double hybrid functional, coupled with the correlation-consistent triple-ζ cc-pVTZ (VTZ) basis set, has been validated in the framework of the semiexperimental (SE) approach for deriving accurate equilibrium structures of molecules containing up to 15 atoms. A systematic comparison between new B2PLYP/VTZ results and several equilibrium SE structures previously determined at other levels, in particular B3LYP/SNSD and CCSD(T) with various basis sets, has demonstrated the accuracy and remarkable stability of this model chemistry for both equilibrium structures and vibrational corrections. New SE equilibrium structures for phenylacetylene, pyruvic acid, peroxyformic acid, and the phenyl radical are discussed and compared with literature data. Particular attention has been devoted to systems for which a lack of sufficient experimental data prevents a complete SE determination. To obtain an accurate equilibrium SE structure in these situations, the so-called templating molecule approach is discussed and generalized with respect to our previous work. Important applications are those involving biological building blocks, such as uracil and thiouracil. In addition, for more general situations the linear regression approach has been proposed and validated.
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually provide only internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized to construct robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools using the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand and helping to guide the development of more accurate methods.
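Random Walk with Restart, the diffusion scheme behind two of the benchmarked tools, repeatedly redistributes a probability vector over the network while returning a fraction r of the mass to the seed genes on every step. A self-contained sketch on a toy 5-node path graph; the network, seed, and restart probability r = 0.3 are illustrative assumptions, not FunCoup data.

```python
def rwr(adj, seeds, r=0.3, tol=1e-10, max_iter=1000):
    """Random Walk with Restart on an unweighted adjacency matrix."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(max_iter):
        nxt = [r * p0[i] for i in range(n)]   # restart mass to the seeds
        for j in range(n):                    # diffuse mass from each node
            if deg[j]:
                share = (1 - r) * p[j] / deg[j]
                for i in range(n):
                    if adj[j][i]:
                        nxt[i] += share
        if sum(abs(a - b) for a, b in zip(nxt, p)) < tol:
            return nxt
        p = nxt
    return p

# Path graph 0-1-2-3-4 with the seed at node 0: scores decay with distance,
# so candidate genes closer to the seed rank higher.
adj = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [0, 0, 0, 1, 0]]
scores = rwr(adj, seeds={0})
print(sorted(range(5), key=scores.__getitem__, reverse=True))  # [0, 1, 2, 3, 4]
```

In a benchmark of the kind described, held-out disease genes would be hidden from the seed set and the tool scored on how highly it ranks them.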
A Comparison of Systematic Screening Tools for Emotional and Behavioral Disorders: A Replication
ERIC Educational Resources Information Center
Lane, Kathleen Lynne; Kalberg, Jemma Robertson; Lambert, E. Warren; Crnobori, Mary; Bruhn, Allison Leigh
2010-01-01
In this article, the authors examine the psychometric properties of the Student Risk Screening Scale (SRSS), including evaluating the concurrent validity of the SRSS to predict results from the Systematic Screening for Behavior Disorders (SSBD) when used to detect school children with externalizing or internalizing behavior concerns at three…
ERIC Educational Resources Information Center
Cassella, Megan Duffy; Sidener, Tina M.; Sidener, David W.; Progar, Patrick R.
2011-01-01
This study systematically replicated and extended previous research on response interruption and redirection (RIRD) by assessing instructed responses of a different topography than the target behavior, percentage of session spent in treatment, generalization of behavior reduction, and social validity of the intervention. Results showed that RIRD…
Certification of highly complex safety-related systems.
Reinert, D; Schaefer, M
1999-01-01
The BIA now has 15 years of experience with the certification of complex electronic systems for safety-related applications in the machinery sector. Using the example of machining centres, this presentation shows the systematic procedure for verifying and validating control systems that use Application Specific Integrated Circuits (ASICs) and microcomputers for safety functions. One section describes the control structure of machining centres with control systems using "integrated safety." A diverse redundant architecture combined with cross-monitoring and forced dynamization is explained. The main section explains the steps of the systematic certification procedure, showing some results from the certification of drilling machines. Specification reviews, design reviews with test case specification, statistical analysis, and walk-throughs are the analytical measures in the testing process. Systematic tests based on the test case specification, electromagnetic interference (EMI) and environmental testing, and site acceptance tests on the machines are the testing measures for validation. A complex software-driven system is always undergoing modification. Most changes are not safety-relevant, but this has to be proven. A systematic procedure for certifying software modifications is presented in the last section of the paper.
Winters, Bradford D; Bharmal, Aamir; Wilson, Renee F; Zhang, Allen; Engineer, Lilly; Defoe, Deidre; Bass, Eric B; Dy, Sydney; Pronovost, Peter J
2016-12-01
The Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs) and Centers for Medicare and Medicaid Services Hospital-acquired Conditions (HACs) are increasingly being used for pay-for-performance and public reporting despite concerns over their validity. Given the potential for these measures to misinform patients, misclassify hospitals, and misapply financial and reputational harm to hospitals, they need to be rigorously evaluated. We performed a systematic review and meta-analysis to assess PSI and HAC measure validity. We searched MEDLINE and the gray literature from January 1, 1990 through January 14, 2015 for studies that addressed the validity of the HAC measures and PSIs. Secondary outcomes included the effects of present-on-admission (POA) modifiers and the most common reasons for discrepancies. We developed pooled results for measures evaluated by ≥3 studies. We propose a threshold of 80% for positive predictive value or sensitivity for pay-for-performance and public reporting suitability. Only 5 measures, Iatrogenic Pneumothorax (PSI 6/HAC 17), Central Line-associated Bloodstream Infections (PSI 7), Postoperative hemorrhage/hematoma (PSI 9), Postoperative deep vein thrombosis/pulmonary embolus (PSI 12), and Accidental Puncture/Laceration (PSI 15), had sufficient data for pooled meta-analysis. Only PSI 15 (Accidental Puncture and Laceration) met our proposed threshold for validity (positive predictive value only), but this result was weakened by considerable heterogeneity. Coding errors were the most common reasons for discrepancies between medical record review and administrative databases. POA modifiers may improve the validity of some measures. This systematic review finds that there is limited validity for the PSI and HAC measures when measured against the reference standard of a medical chart review.
Their use, as they currently exist, for public reporting and pay-for-performance, should be publicly reevaluated in light of these findings.
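The 80% suitability threshold used in the review can be illustrated with a crude pooled positive predictive value, total true positives over total flagged cases, across chart-review validation studies. The study counts below are invented, and the review itself used formal meta-analytic pooling rather than this simple aggregation.

```python
# (tp, fp): true/false positives from chart-review validation, per study.
# Hypothetical counts for three studies of one indicator.
studies = [(45, 5), (80, 30), (60, 10)]

tp = sum(t for t, _ in studies)          # cases confirmed by chart review
flagged = sum(t + f for t, f in studies)  # all cases the indicator flagged
ppv = tp / flagged

print(round(ppv, 3))   # pooled positive predictive value
print(ppv >= 0.80)     # meets the proposed 80% suitability threshold?
```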
Gaia Data Release 1. Validation of the photometry
NASA Astrophysics Data System (ADS)
Evans, D. W.; Riello, M.; De Angeli, F.; Busso, G.; van Leeuwen, F.; Jordi, C.; Fabricius, C.; Brown, A. G. A.; Carrasco, J. M.; Voss, H.; Weiler, M.; Montegriffo, P.; Cacciari, C.; Burgess, P.; Osborne, P.
2017-04-01
Aims: The photometric validation of the Gaia DR1 release of the ESA Gaia mission is described and the quality of the data shown. Methods: This is carried out via an internal analysis of the photometry using the most constant sources. Comparisons with external photometric catalogues are also made, but are limited by the accuracies and systematics present in these catalogues. An analysis of the quoted errors is also described. Investigations of the calibration coefficients reveal some of the systematic effects that affect the fluxes. Results: The analysis of the constant sources shows that the early-stage photometric calibrations can reach an accuracy as low as 3 mmag.
Wellner, Ulrich F; Klinger, Carsten; Lehmann, Kai; Buhr, Heinz; Neugebauer, Edmund; Keck, Tobias
2017-04-05
Pancreatic resections are among the most complex procedures in visceral surgery. While mortality has decreased substantially over the past decades, morbidity remains high. The volume-outcome correlation in pancreatic surgery is among the strongest in the field of surgery. The German Society for General and Visceral Surgery (DGAV) established a national registry for quality control, risk assessment and outcomes research in pancreatic surgery in Germany (DGAV StuDoQ|Pancreas). Here, we present the aims and scope of the DGAV StuDoQ|Pancreas registry. A systematic assessment of registry quality was performed based on the consensus criteria of the German network for outcomes research (DNVF), covering the domains Systematics and Appropriateness, Standardization, Validity of the sampling procedure, Validity of data collection, Validity of statistical analysis and reports, and General demands for registry quality. In summary, DGAV StuDoQ|Pancreas meets most of the criteria of a high-quality clinical registry and provides a valuable platform for quality assessment, outcomes research, and randomized registry trials in pancreatic surgery.
Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold
2016-01-01
Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438
Abbott, Laurie S; Elliott, Lynn T
2017-01-01
The purpose of this systematic literature review was to synthesize the results of transdisciplinary interventions designed with a home visit component in experimental and quasi-experimental studies having representative samples of racial and ethnic minorities. The design of this systematic review was adapted to include both experimental and quasi-experimental quantitative studies. The predetermined inclusion criteria were studies (a) having an experimental or quasi-experimental quantitative design, (b) having a home visit as a research component, (c) including a prevention research intervention strategy targeting health and/or safety issues, (d) conducted in the United States, (e) having representation (at least 30% in the total sample size) of one or more racial/ethnic minority, (f) available in full text, and (g) published in a peer-reviewed journal between January, 2005 and December, 2015. Thirty-nine articles were included in the review. There were 20 primary prevention, 5 secondary prevention, and 14 tertiary prevention intervention studies. Community and home visitation interventions by nurses can provide an effective means for mitigating social determinants of health by empowering people at risk for health disparities to avoid injury, maintain health, and prevent and manage existing disease. © 2016 Wiley Periodicals, Inc.
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information that can be used to assess model performance; automating this information allows a systematic methodology for assessing the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed-order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation, and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
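Two of the solution-verification quantities described above, the observed order of accuracy from Richardson extrapolation and the Grid Convergence Index, follow the standard Roache formulation. A Python sketch with invented solution values from three systematically refined grids at a refinement ratio of r = 2.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """p = ln((f3 - f2) / (f2 - f1)) / ln(r) for a constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, Fs=1.25):
    """Grid Convergence Index: relative numerical-uncertainty band, fine grid."""
    e = abs((f_medium - f_fine) / f_fine)
    return Fs * e / (r ** p - 1)

f_fine, f_medium, f_coarse = 0.9713, 0.9615, 0.9176  # invented grid solutions
r = 2.0
p = observed_order(f_coarse, f_medium, f_fine, r)
print(round(p, 2))  # observed order ~2.2, i.e. near second-order convergence
print(round(100 * gci_fine(f_medium, f_fine, r, p), 2), "%")
```

The safety factor Fs = 1.25 is the conventional value for three-grid studies; monotone convergence (differences of the same sign) is assumed, since otherwise the logarithm is undefined.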
Png, Kelly; Kwan, Yu Heng; Leung, Ying Ying; Phang, Jie Kie; Lau, Jia Qi; Lim, Ka Keat; Chew, Eng Hui; Low, Lian Leng; Tan, Chuen Seng; Thumboo, Julian; Fong, Warren; Østbye, Truls
2018-03-21
This systematic review aimed to identify studies investigating measurement properties of patient-reported outcome measures (PROMs) for spondyloarthritis (SpA), and to evaluate their methodological quality and level of evidence relating to the measurement properties of PROMs. This systematic review was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Articles published before 30 June 2017 were retrieved from PubMed®, Embase®, and PsycINFO® (Ovid). Methodological quality and level of evidence were evaluated according to recommendations from the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN). We identified 60 unique PROMs from 125 studies in 39 countries. Twenty-one PROMs were validated for two or more SpA subtypes. The literature examined hypothesis testing (82.4%) most frequently, followed by reliability (60.0%). Of the studies that assessed hypothesis testing and reliability, 77.7% and 42.7%, respectively, had "fair" or better methodological quality. Among the PROMs identified, 41.7% were studied in ankylosing spondylitis (AS) only and 23.3% were studied in psoriatic arthritis (PsA) only. The more extensively assessed PROMs included the Ankylosing Spondylitis Quality of Life (ASQoL) questionnaire and Bath Ankylosing Spondylitis Functional Index (BASFI) for ankylosing spondylitis, and the psoriatic arthritis quality of life questionnaire (VITACORA-19) for psoriatic arthritis. This study identified 60 unique PROMs through a systematic review and synthesized evidence of their measurement properties. There is a lack of validation of PROMs for use across SpA subtypes, and future studies may consider validating PROMs for use across different subtypes. Copyright © 2018 Elsevier Inc. All rights reserved.
Pichler, Martin; Stiegelbauer, Verena; Vychytilova-Faltejskova, Petra; Ivan, Cristina; Ling, Hui; Winter, Elke; Zhang, Xinna; Goblirsch, Matthew; Wulf-Goldenberg, Annika; Ohtsuka, Masahisa; Haybaeck, Johannes; Svoboda, Marek; Okugawa, Yoshinaga; Gerger, Armin; Hoefler, Gerald; Goel, Ajay; Slaby, Ondrej; Calin, George Adrian
2017-03-01
Purpose: Characterization of the colorectal cancer transcriptome by high-throughput techniques has enabled the discovery of several differentially expressed genes, including previously unreported miRNA abnormalities. Here, we followed a systematic approach on a global scale to identify miRNAs as clinical outcome predictors and further validated them in clinical and experimental settings. Experimental Design: Genome-wide miRNA sequencing data of 228 colorectal cancer patients from The Cancer Genome Atlas dataset were analyzed as a screening cohort to identify miRNAs significantly associated with survival according to stringent prespecified criteria. A panel of six miRNAs was further validated for prognostic utility in a large independent validation cohort (n = 332). In situ hybridization and functional experiments in a panel of colorectal cancer cell lines and xenografts further clarified the role of clinically relevant miRNAs. Results: Six miRNAs (miR-92b-3p, miR-188-3p, miR-221-5p, miR-331-3p, miR-425-3p, and miR-497-5p) were identified as strong predictors of survival in the screening cohort. High miR-188-3p expression proved to be an independent prognostic factor [screening cohort: HR = 4.137; 95% confidence interval (CI), 1.568-10.917; P = 0.004; validation cohort: HR = 1.538; 95% CI, 1.107-2.137; P = 0.010]. Forced miR-188-3p expression increased the migratory behavior of colorectal cancer cells in vitro and metastasis formation in vivo (P < 0.05). The promigratory role of miR-188-3p is mediated by direct interaction with MLLT4, a newly identified player in colorectal cancer cell migration. Conclusions: miR-188-3p is a novel independent prognostic factor in colorectal cancer patients, which can be partly explained by its effect on MLLT4 expression and migration of cancer cells. Clin Cancer Res; 23(5); 1323-33. ©2016 AACR.
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the earth in real time, regardless of weather conditions. Since the system became operational, improvements in many areas have reduced the systematic errors affecting GPS measurements, such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude, because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency, because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates.
The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
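The geometric-delay argument above can be made concrete with a small numerical sketch. For a specular reflection off a horizontal surface a height h below the antenna, the extra path length of the reflected signal is 2h·sin(elevation); this is a standard textbook relation, not code from the dissertation, and the antenna height and elevation values below are illustrative only.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ground_multipath_delay(antenna_height_m: float, elevation_deg: float) -> float:
    """Extra path length (m) of a specular reflection off a horizontal
    surface below the antenna: delta = 2 * h * sin(elevation)."""
    return 2.0 * antenna_height_m * math.sin(math.radians(elevation_deg))

# A reflector 2 m below the antenna, satellite at 30 degrees elevation:
delta_m = ground_multipath_delay(2.0, 30.0)  # 2.0 m of extra path
delay_ns = delta_m / C * 1e9                 # corresponding time delay, ~6.7 ns
```

The relation also illustrates why airborne multipath is small in magnitude: on an aircraft, h is bounded by the size of the airframe, which caps the achievable geometric delay.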
Living systematic review: 1. Introduction-the why, what, when, and how.
Elliott, Julian H; Synnot, Anneliese; Turner, Tari; Simmonds, Mark; Akl, Elie A; McDonald, Steve; Salanti, Georgia; Meerpohl, Joerg; MacLehose, Harriet; Hilton, John; Tovey, David; Shemilt, Ian; Thomas, James
2017-11-01
Systematic reviews are difficult to keep up to date, but failure to do so leads to a decay in review currency, accuracy, and utility. We are developing a novel approach to systematic review updating termed "Living systematic review" (LSR): systematic reviews that are continually updated, incorporating relevant new evidence as it becomes available. LSRs may be particularly important in fields where research evidence is emerging rapidly, current evidence is uncertain, and new research may change policy or practice decisions. We hypothesize that a continual approach to updating will achieve greater currency and validity, and increase the benefits to end users, with feasible resource requirements over time. Copyright © 2017 Elsevier Inc. All rights reserved.
Clinical audit project in undergraduate medical education curriculum: an assessment validation study
Steketee, Carole; Mak, Donna
2016-01-01
Objectives To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework. Methods A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011-2014). Results The CAP is fit for purpose with clear external and internal alignment to expected medical graduate outcomes. Substantive validity in students’ and examiners’ response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built into the design, planning and implementation process of the CAP. There is evidence of high internal consistency reliability of CAP scores (Cronbach’s alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation > 0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct. Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes of medical graduates. Standard setting, score equating and fair decision rules justify consequential validity of CAP score interpretation and use. Conclusions This study provides evidence demonstrating that the CAP is a meaningful and valid component in the assessment program. This systematic framework of validation can be adopted for all levels of assessment in medical education, from an individual assessment modality to the validation of an assessment program as a whole. PMID:27716612
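The internal-consistency figure reported above (Cronbach's alpha > 0.8) comes from a standard formula that can be sketched in a few lines. The marking data below are hypothetical, for illustration only, and are not taken from the study.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for per-student rows of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(scores[0])
    items = list(zip(*scores))                      # columns = items
    item_var = sum(variance(col) for col in items)  # sample variance per item
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical marks for five students on four CAP marking criteria:
marks = [[4, 5, 4, 5],
         [2, 2, 3, 2],
         [5, 4, 5, 5],
         [3, 3, 3, 4],
         [1, 2, 2, 1]]
alpha = cronbach_alpha(marks)  # high alpha suggests one underlying construct
```

When items co-vary strongly, as in this toy matrix, alpha approaches 1; values above 0.8 are conventionally read as high internal consistency.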
Tor, Elina; Steketee, Carole; Mak, Donna
2016-09-24
To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework. A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011-2014). The CAP is fit for purpose with clear external and internal alignment to expected medical graduate outcomes. Substantive validity in students' and examiners' response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built into the design, planning and implementation process of the CAP. There is evidence of high internal consistency reliability of CAP scores (Cronbach's alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation > 0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct. Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes of medical graduates. Standard setting, score equating and fair decision rules justify consequential validity of CAP score interpretation and use. This study provides evidence demonstrating that the CAP is a meaningful and valid component in the assessment program. This systematic framework of validation can be adopted for all levels of assessment in medical education, from an individual assessment modality to the validation of an assessment program as a whole.
Journalism Degree Motivations: The Development of a Scale
ERIC Educational Resources Information Center
Carpenter, Serena; Grant, August E.; Hoag, Anne
2016-01-01
Scientific knowledge should reflect valid, consistent measurement. It is argued research on scale development needs to be more systematic and prevalent. The intent of this article is to address scale development by creating and validating a construct that measures the underlying reasons why undergraduate students seek a degree in journalism, the…
Discriminantly Valid Personality Measures: Some Propositions. Research Bulletin No. 339.
ERIC Educational Resources Information Center
Jackson, Douglas N.
Starting with the premise that the construct-oriented approach is the only viable approach to personality assessment, this paper considers five propositions. First, a prerequisite to generalizable and valid psychometric measurement of personality rests on the choice of broad-based constructs with systematic univocal definitions. Next, measures…
Klugarova, Jitka; Klugar, Miloslav; Mareckova, Jana; Gallo, Jiri; Kelnarova, Zuzana
2016-01-01
Total hip replacement is the most effective and safest method for treating severe degenerative, traumatic and other diseases of the hip joint. Total hip replacement can reliably relieve pain and improve function in the majority of patients for a period of 15 to 20 years or more postoperatively. Physical therapy follows each total hip replacement surgery. Physical therapy protocols after total hip replacement in the post-discharge period vary widely in terms of setting (inpatient, outpatient), content (the particular set of exercises used), and frequency (e.g. daily versus twice a week). In the current literature, there is no systematic review that has compared the effectiveness of inpatient and outpatient physical therapy in patients after total hip replacement in the post-discharge period. The objective of this systematic review was to compare the effectiveness of inpatient physical therapy with outpatient physical therapy on quality of life and gait measures in older adults after total hip replacement in the post-discharge period. This review considered studies that included older adults (over 65 years) who had had total hip replacement and were in the post-discharge period. Adults with bilateral or multiple simultaneous surgeries, and patients who had had hemiarthroplasty of the hip joint, were excluded. This review considered studies that included any type of physical therapy delivered in inpatient settings provided by professionals with education in physical therapy. Inpatient physical therapy delivered at any frequency and over any duration was included. This review considered studies that included as a comparator any type of physical therapy delivered in outpatient settings provided by professionals with education in physical therapy, or no physical therapy. This review considered studies that included the following primary and secondary outcomes. The primary outcome was quality of life, assessed by any validated assessment tool.
The secondary outcome was measures of gait assessed by any valid method. This review considered both experimental and observational study designs for inclusion, including randomized controlled trials, non-randomized controlled trials, quasi-experimental and before-and-after studies, prospective and retrospective cohort studies, case control studies, and analytical cross-sectional studies. The search strategy aimed to find both published and unpublished studies. A three-step search strategy was utilized in 12 databases. Studies published in any language and at any date were considered for inclusion in this review. Assessment of methodological quality was not conducted, as no studies were identified that met the inclusion criteria. Data extraction and synthesis were not performed because no studies were included in this systematic review. Through the three-step search strategy, 4330 papers were identified. The primary and secondary reviewers independently screened titles and abstracts and retrieved 42 potentially relevant papers according to the inclusion criteria. Following assessment of the full texts, all of the retrieved papers were excluded based on the inclusion criteria. There is no scientific evidence comparing the effectiveness of inpatient physical therapy with outpatient physical therapy in older patients after total hip replacement in the post-discharge period. This systematic review has identified a gap in the literature comparing the effectiveness of inpatient physical therapy with outpatient physical therapy on quality of life and gait measures in older adults after total hip replacement in the post-discharge period. Prospective randomized double-blind multicenter controlled trials are needed to answer this important clinical question.
Toward a systematic exploration of nano-bio interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Xue; Liu, Fang; Liu, Yin
Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between the inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data-driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design, with the modelling of the results in mind, is a proven and efficient way of exploring large materials spaces. This approach, coupled with the high-speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate a greatly increased focus on systematic modification of the physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain a better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.
Xu, H; Li, C; Zeng, Q; Agrawal, I; Zhu, X; Gong, Z
2016-06-01
In this study, to systematically identify the most stably expressed genes for internal reference in zebrafish Danio rerio investigations, 37 D. rerio transcriptomic datasets (both RNA sequencing and microarray data) were collected from the Gene Expression Omnibus (GEO) database and unpublished data, and gene expression variations were analysed under three experimental conditions: tissue types, developmental stages and chemical treatments. Forty-four putative candidate genes with c.v. <0·2 were identified from all datasets. Following clustering into different functional groups, 21 genes, in addition to four conventional housekeeping genes (eef1a1l1, b2m, hrpt1l and actb1), were selected from different functional groups for further quantitative real-time (qrt-)PCR validation using 25 RNA samples from different adult tissues, developmental stages and chemical treatments. The qrt-PCR data were then analysed using the statistical algorithm refFinder for gene expression stability. Several new candidate genes showed better expression stability than the conventional housekeeping genes in all three categories. It was found that sep15 and metap1 were the top two stable genes for tissue types, ube2a and tmem50a the top two for developmental stages, and rpl13a and rp1p0 the top two for chemical treatments. Thus, based on these extensive transcriptomic analyses and qrt-PCR validation, these new reference genes are recommended for normalization of D. rerio qrt-PCR data for the three respective experimental conditions. © 2016 The Fisheries Society of the British Isles.
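The c.v. < 0·2 screening step described above amounts to a per-gene coefficient-of-variation filter across samples. A minimal sketch, with hypothetical expression values (the gene names are borrowed from the abstract; the numbers are invented):

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """Coefficient of variation: sample standard deviation over the mean."""
    return stdev(samples) / mean(samples)

def stable_candidates(expression, cv_cutoff=0.2):
    """Gene names whose expression varies little across samples,
    mirroring the c.v. < 0.2 screen described in the abstract."""
    return [gene for gene, values in expression.items()
            if coefficient_of_variation(values) < cv_cutoff]

# Hypothetical normalized expression across six samples:
expression = {
    "sep15":  [10.1, 10.0, 9.9, 10.2, 9.8, 10.0],  # stable
    "metap1": [5.0, 5.1, 4.9, 5.2, 4.8, 5.0],      # stable
    "actb1":  [3.0, 9.0, 1.0, 7.0, 2.0, 8.0],      # highly variable
}
print(stable_candidates(expression))  # ['sep15', 'metap1']
```

In the study itself this screen was only the first pass; the retained candidates were then clustered, validated by qrt-PCR and ranked with refFinder.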
[Information system for supporting the Nursing Care Systematization].
Malucelli, Andreia; Otemaier, Kelly Rafaela; Bonnet, Marcel; Cubas, Marcia Regina; Garcia, Telma Ribeiro
2010-01-01
The importance, relevance and necessity of implementing the Nursing Care Systematization in the different environments of professional practice are unquestionable. With this as a guiding principle, we were motivated to develop an information system to support the Nursing Care Systematization, based on the steps of the Nursing Process and on Human Needs, using the languages of nursing diagnoses, interventions and outcomes for professional practice documentation. This paper describes the methodological steps and results of the information system development: requirements elicitation, modeling, object-relational mapping, implementation and system validation.
Fernandez-Hermida, Jose Ramon; Calafat, Amador; Becoña, Elisardo; Tsertsvadze, Alexander; Foxcroft, David R
2012-09-01
To assess external validity characteristics of studies from two Cochrane Systematic Reviews of the effectiveness of universal family-based prevention of alcohol misuse in young people. Two reviewers used an a priori developed external validity rating form and independently assessed three external validity dimensions of generalizability, applicability and predictability (GAP) in randomized controlled trials. The majority (69%) of the included 29 studies were rated 'unclear' on the reporting of sufficient information for judging generalizability from sample to study population. Ten studies (35%) were rated 'unclear' on the reporting of sufficient information for judging applicability to other populations and settings. No study provided an assessment of the validity of the trial end-point measures for subsequent mortality, morbidity, quality of life or other economic or social outcomes. Similarly, no study reported on the validity of surrogate measures using established criteria for assessing surrogate end-points. Studies evaluating the benefits of family-based prevention of alcohol misuse in young people are generally inadequate at reporting information relevant to generalizability of the findings or implications for health or social outcomes. Researchers, study authors, peer reviewers, journal editors and scientific societies should take steps to improve the reporting of information relevant to external validity in prevention trials. © 2012 The Authors. Addiction © 2012 Society for the Study of Addiction.
CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Davis, David O.
2015-01-01
Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into flow behavior but often lack the details necessary to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions; (2) inconsistent results; (3) undocumented 3D effects (centerline-only measurements); and (4) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended for validation purposes.
Goal setting as an outcome measure: A systematic review.
Hurn, Jane; Kneebone, Ian; Cropley, Mark
2006-09-01
Goal achievement has been considered to be an important measure of outcome by clinicians working with patients in physical and neurological rehabilitation settings. This systematic review was undertaken to examine the reliability, validity and sensitivity of goal setting and goal attainment scaling approaches when used with working age and older people. To review the reliability, validity and sensitivity of both goal setting and goal attainment scaling when employed as an outcome measure within a physical and neurological working age and older person rehabilitation environment, by examining the research literature covering the 36 years since goal-setting theory was proposed. Data sources included a computer-aided literature search of published studies examining the reliability, validity and sensitivity of goal setting/goal attainment scaling, with further references sourced from articles obtained through this process. There is strong evidence for the reliability, validity and sensitivity of goal attainment scaling. Empirical support was found for the validity of goal setting but research demonstrating its reliability and sensitivity is limited. Goal attainment scaling appears to be a sound measure for use in physical rehabilitation settings with working age and older people. Further work needs to be carried out with goal setting to establish its reliability and sensitivity as a measurement tool.
Pinto, Filipe; Pacheco, Catarina C.; Oliveira, Paulo; Montagud, Arnau; Landels, Andrew; Couto, Narciso; Wright, Phillip C.; Urchueguía, Javier F.; Tamagnini, Paula
2015-01-01
The use of microorganisms as cell factories frequently requires extensive molecular manipulation. Therefore, the identification of genomic neutral sites for the stable integration of ectopic DNA is required to ensure a successful outcome. Here we describe the genome mapping and validation of five neutral sites in the chromosome of Synechocystis sp. PCC 6803, foreseeing the use of this cyanobacterium as a photoautotrophic chassis. To evaluate the neutrality of these loci, insertion/deletion mutants were produced, and to assess their functionality, a synthetic green fluorescent reporter module was introduced. The constructed integrative vectors include a BioBrick-compatible multiple cloning site insulated by transcription terminators, constituting robust cloning interfaces for synthetic biology approaches. Moreover, Synechocystis mutants (chassis) ready to receive purpose-built synthetic modules/circuits are also available. This work presents a systematic approach to map and validate chromosomal neutral sites in cyanobacteria, and that can be extended to other organisms. PMID:26490728
Winterfeld, Katrin; Quera, Vicenç; Winterfeld, Tobias; Ganss, Carolina
2018-01-01
Systematics is considered important for effective toothbrushing. A theoretical concept of systematics in toothbrushing and a validated index to quantify it from observational data are suggested. The index consists of three components: completeness (all areas of the dentition reached), isochronicity (all areas brushed equally long) and consistency (avoiding frequent alternations between areas). Toothbrushing should take a sufficient length of time; therefore, this parameter is part of the index value calculation. Quantitative data from video observations were used, including the number of changes between areas, the number of areas reached, absolute brushing time and brushing time per area. These data were fed into two algorithms that converted the behaviour into two index values (each between 0 and 1), which were summed as the Toothbrushing Systematics Index (TSI) value; 0 indicates completely unsystematic and 2 indicates perfectly systematic brushing. The index was developed using theoretical data. The data matrices revealed the highest values when all areas were reached and brushed equally long, with few changes between areas, and the brushing duration was ≥90 s; the lowest values occurred under the opposite conditions. Clinical applicability was tested with re-analysed videos from an earlier intervention study aiming to establish a pre-defined toothbrushing sequence. Subjects who fully adopted this sequence had a baseline TSI of 1.30±0.26, which increased to 1.74±0.09 after the intervention (p≤0.001). When participants who only partially adopted the sequence were included, the respective values were 1.25±0.27 and 1.69±0.14 (p≤0.001). The suggested new TSI can cover a variety of clinically meaningful variations of systematic brushing, validly quantifies changes in toothbrushing systematics and has discriminative power. PMID:29708989
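To illustrate how component scores of this kind might combine into a 0-2 index, here is a hypothetical sketch of two of the three components (completeness and isochronicity). The actual TSI algorithms are defined by the authors and also incorporate consistency of area changes and the ≥90 s duration threshold; the functions, the 16-area division and the data below are illustrative assumptions only, not the published index.

```python
def completeness(times_per_area, n_areas=16):
    """Fraction of areas of the dentition that were reached at all."""
    return sum(t > 0 for t in times_per_area) / n_areas

def isochronicity(times_per_area):
    """1 minus the normalized spread of brushing time across reached areas
    (1.0 means all reached areas were brushed equally long)."""
    reached = [t for t in times_per_area if t > 0]
    if not reached:
        return 0.0
    avg = sum(reached) / len(reached)
    spread = sum(abs(t - avg) for t in reached) / (len(reached) * avg)
    return max(0.0, 1.0 - spread)

# Hypothetical per-area brushing times (s) for a 16-area dentition:
times = [8, 7, 9, 8, 7, 8, 9, 8, 7, 8, 8, 9, 7, 8, 8, 9]
toy_index = completeness(times) + isochronicity(times)  # near 2 = systematic
```

A subject who skips half the areas, or dwells on a few, would score markedly lower on one or both components, which is the discriminative behaviour the published index is designed to capture.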
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...
Validation of the Systematic Screening for Behavior Disorders in Middle and Junior High School
ERIC Educational Resources Information Center
Caldarella, Paul; Young, Ellie L.; Richardson, Michael J.; Young, Benjamin J.; Young, K. Richard
2008-01-01
The Systematic Screening for Behavior Disorders (SSBD), a multistage screening system designed to identify elementary school--age children at risk for emotional and behavioral disorders, was evaluated for use with middle and junior high school students. During SSBD Stage 1, teachers identified 123 students in grades 6 through 9 with…
ERIC Educational Resources Information Center
Francis, David O.; Daniero, James J.; Hovis, Kristen L.; Sathe, Nila; Jacobson, Barbara; Penson, David F.; Feurer, Irene D.; McPheeters, Melissa L.
2017-01-01
Purpose: The purpose of this study was to perform a comprehensive systematic review of the literature on voice-related patient-reported outcome (PRO) measures in adults and to evaluate each instrument for the presence of important measurement properties. Method: MEDLINE, the Cumulative Index of Nursing and Allied Health Literature, and the Health…
ERIC Educational Resources Information Center
Clanchy, Kelly M.; Tweedy, Sean M.; Boyd, Roslyn
2011-01-01
Aim: This systematic review compares the validity, reliability, and clinical use of habitual physical activity (HPA) performance measures in adolescents with cerebral palsy (CP). Method: Measures of HPA across Gross Motor Function Classification System (GMFCS) levels I-V for adolescents (10-18y) with CP were included if at least 60% of items…
This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...
Evaluating the Validity of Systematic Reviews to Identify Empirically Supported Treatments
ERIC Educational Resources Information Center
Slocum, Timothy A.; Detrich, Ronnie; Spencer, Trina D.
2012-01-01
The "best available evidence" is one of the three basic inputs into evidence-based practice. This paper sets out a framework for evaluating the quality of systematic reviews that are intended to identify empirically supported interventions as a way of summarizing the best available evidence. The premise of this paper is that the process of…
The State of the Art in Self-Study of Teacher Education Practices: A Systematic Literature Review
ERIC Educational Resources Information Center
Vanassche, Eline; Kelchtermans, Geert
2015-01-01
This article reports on a systematic review of the Self-Study of Teacher Education Practices research literature published between 1990 and 2012. Self-study research refers to teacher educators researching their practice with the purpose of improving it, making explicit and validating their professional expertise and, at the same time,…
Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom
2015-01-01
Objective: Estimating calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published, accessible nor sufficient, and no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: (a) the ABCD2 rule for prediction of 7-day stroke; and (b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed from CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I²) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The above approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0-3; intermediate, 4-5; high, 6-7 points), indicated that the predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61-0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs = 0.73-0.91, 95% CIs 0.41-1.48) could be further corrected by intercept adjustment to account for incidence differences. An improvement in both heterogeneity and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90-1.06), with narrower 95% CIs (0.57-1.41), were achieved.
Conclusion: Our results have an immediate clinical implication in situations where predicted outcomes in CPR validation studies are lacking or deficient, by describing how such predictions can be obtained by anyone from the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
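The pooled “predicted:observed” risk ratios described above follow standard fixed-effect inverse-variance meta-analysis on the log scale. A minimal sketch with invented study-level inputs (the RRs and standard errors below are not from the review):

```python
import math

def pooled_risk_ratio(rrs, ses):
    """Fixed-effect inverse-variance pooling on the log scale:
    weights w_i = 1/se_i^2; pooled log RR = sum(w_i * log RR_i) / sum(w_i)."""
    weights = [1.0 / se ** 2 for se in ses]
    log_pooled = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci

# Hypothetical predicted:observed RRs and log-scale standard errors
# from three validation studies:
rr, (lo, hi) = pooled_risk_ratio([0.80, 0.95, 1.05], [0.20, 0.15, 0.25])
```

A pooled RR below 1 with a CI excluding 1 would indicate systematic under-prediction, the situation the authors correct with intercept adjustment; a random-effects variant would add a between-study variance term to each weight.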
Chaabene, Helmi; Negra, Yassine; Bouguezzi, Raja; Capranica, Laura; Franchini, Emerson; Prieske, Olaf; Hbacha, Hamdi; Granacher, Urs
2018-01-01
The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports, namely amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included if they reported validation data (e.g., reliability, validity, sensitivity) for sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) had sample sizes <30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria for their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or absent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43-1.00). Content validity was addressed in all included studies; criterion validity (only its concurrent aspect) in approximately half of the studies, with correlation coefficients ranging from r = -0.41 to 0.90. Construct validity was reported in 31% of the included studies, and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored or provided incomplete information on test feasibility and the methodological limitations of the sport-specific test.
In 28% of the included studies, insufficient information, or none at all, was provided on the respective field of test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Future research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.
Jee, Sandra H; Halterman, Jill S; Szilagyi, Moira; Conn, Anne-Marie; Alpert-Gillis, Linda; Szilagyi, Peter G
2011-01-01
To determine whether systematic use of a validated social-emotional screening instrument in a primary care setting is feasible and improves detection of social-emotional problems among youth in foster care. Before-and-after study design, following a practice intervention to screen all youth in foster care for psychosocial problems using the Strengths and Difficulties Questionnaire (SDQ), a validated instrument with 5 subdomains. After implementation of systematic screening, youth aged 11 to 17 years and their foster parents completed the SDQ at routine health maintenance visits. We assessed feasibility of screening by measuring the completion rates of SDQ by youth and foster parents. We compared the detection of psychosocial problems during a 2-year period before systematic screening to the detection after implementation of systematic screening with the SDQ. We used chart reviews to assess detection at baseline and after implementing systematic screening. Altogether, 92% of 212 youth with routine visits that occurred after initiation of screening had a completed SDQ in the medical record, demonstrating high feasibility of systematic screening. Detection of a potential mental health problem was higher in the screening period than baseline period for the entire population (54% vs 27%, P < .001). More than one-fourth of youth had 2 or more significant social-emotional problem domains on the SDQ. Systematic screening for potential social-emotional problems among youth in foster care was feasible within a primary care setting and doubled the detection rate of potential psychosocial problems. Copyright © 2011 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Patterson, Stephanie Y.; Smith, Veronica; Jelen, Michaela
2010-01-01
Aim: The purpose of this systematic review was to examine the quality of conduct of experimental studies contributing to our empirical understanding of function-based behavioural interventions for stereotypic and repetitive behaviours (SRBs) in individuals with autism spectrum disorders (ASDs). Method: Systematic review methodology was used to…
Rockers, Peter C; Tugwell, Peter; Grimshaw, Jeremy; Oliver, Sandy; Atun, Rifat; Røttingen, John-Arne; Fretheim, Atle; Ranson, M Kent; Daniels, Karen; Luiza, Vera Lucia; Bärnighausen, Till
2017-09-01
Evidence from quasi-experimental studies is often excluded from systematic reviews of health systems research, despite the fact that such studies can provide strong causal evidence when well conducted. This article discusses global coordination of efforts to institutionalize the inclusion of causal evidence from quasi-experiments in systematic reviews of health systems research. In particular, we are concerned with identifying opportunities for strengthening capacity at the global and local levels for implementing the protocols necessary to ensure that reviews including quasi-experiments are consistently of the highest quality. We first describe the current state of the global infrastructure that facilitates the production of systematic reviews of health systems research. We identify five important types of actors operating within this infrastructure: review authors; synthesis collaborations that facilitate the review process; synthesis interest groups that supplement the work of the larger collaborations; review funders; and end users, including policymakers. Then, we examine opportunities for intervening to build the capacity of each type of actor to support the inclusion of quasi-experiments in reviews. Finally, we suggest practical next steps for proceeding with capacity-building efforts. Because of the complexity and relative nascence of the field, we recommend a carefully planned and executed approach to strengthening global capacity for the inclusion of quasi-experimental studies in systematic reviews. Copyright © 2017 Elsevier Inc. All rights reserved.
An Observation-Driven Agent-Based Modeling and Analysis Framework for C. elegans Embryogenesis.
Wang, Zi; Ramsey, Benjamin J; Wang, Dali; Wong, Kwai; Li, Husheng; Wang, Eric; Bao, Zhirong
2016-01-01
With cutting-edge live microscopy and image analysis, biologists can now systematically track individual cells in complex tissues and quantify cellular behavior over extended time windows. Computational approaches that utilize the systematic and quantitative data are needed to understand how cells interact in vivo to give rise to the different cell types and 3D morphology of tissues. An agent-based, minimum descriptive modeling and analysis framework is presented in this paper to study C. elegans embryogenesis. The framework is designed to incorporate the large amounts of experimental observations on cellular behavior and reserve data structures/interfaces that allow regulatory mechanisms to be added as more insights are gained. Observed cellular behaviors are organized into lineage identity, timing and direction of cell division, and path of cell movement. The framework also includes global parameters such as the eggshell and a clock. Division and movement behaviors are driven by statistical models of the observations. Data structures/interfaces are reserved for gene list, cell-cell interaction, cell fate and landscape, and other global parameters until the descriptive model is replaced by a regulatory mechanism. This approach provides a framework to handle the ongoing experiments of single-cell analysis of complex tissues where mechanistic insights lag data collection and need to be validated on complex observations.
Bayesian refinement of protein structures and ensembles against SAXS data using molecular dynamics
Shevchuk, Roman; Hub, Jochen S.
2017-01-01
Small-angle X-ray scattering (SAXS) is an increasingly popular technique used to detect protein structures and ensembles in solution. However, the refinement of structures and ensembles against SAXS data is often ambiguous due to the low information content of SAXS data, unknown systematic errors, and unknown scattering contributions from the solvent. We offer a solution to such problems by combining Bayesian inference with all-atom molecular dynamics simulations and explicit-solvent SAXS calculations. The Bayesian formulation correctly weights the SAXS data against prior physical knowledge, quantifies the precision or ambiguity of fitted structures and ensembles, and accounts for unknown systematic errors due to poor buffer matching. The method further provides a probabilistic criterion for identifying the number of states required to explain the SAXS data. The method is validated by refining ensembles of a periplasmic binding protein against calculated SAXS curves. Subsequently, we derive the solution ensembles of the eukaryotic chaperone heat shock protein 90 (Hsp90) against experimental SAXS data. We find that the SAXS data of the apo state of Hsp90 are compatible with a single wide-open conformation, whereas the SAXS data of Hsp90 bound to ATP or to an ATP analogue strongly suggest heterogeneous ensembles of a closed and a wide-open state. PMID:29045407
A Systematic Review of Rural, Theory-based Physical Activity Interventions.
Walsh, Shana M; Meyer, M Renée Umstattd; Gamble, Abigail; Patterson, Megan S; Moore, Justin B
2017-05-01
This systematic review synthesized the scientific literature on theory-based physical activity (PA) interventions in rural populations. PubMed, PsycINFO, and Web of Science databases were searched to identify studies with a rural study sample, PA as a primary outcome, use of a behavioral theory or model, randomized or quasi-experimental research design, and application at the primary and/or secondary level of prevention. Thirty-one studies met our inclusion criteria. The Social Cognitive Theory (N = 14) and Transtheoretical Model (N = 10) were the most frequently identified theories; however, most intervention studies were informed by theory but lacked higher-level theoretical application and testing. Interventions largely took place in schools (N = 10) and with female-only samples (N = 8). Findings demonstrated that theory-based PA interventions are mostly successful at increasing PA in rural populations but require improvement. Future studies should incorporate higher levels of theoretical application, and should explore adapting or developing rural-specific theories. Study designs should employ more rigorous research methods to decrease bias and increase validity of findings. Follow-up assessments to determine behavioral maintenance and/or intervention sustainability are warranted. Finally, funding agencies and journals are encouraged to adopt rural-urban commuting area codes as the standard for defining rural.
A systematic quality review of high-tech AAC interventions as an evidence-based practice.
Morin, Kristi L; Ganz, Jennifer B; Gregori, Emily V; Foster, Margaret J; Gerow, Stephanie L; Genç-Tosun, Derya; Hong, Ee Rea
2018-06-01
Although high-tech augmentative and alternative communication (AAC) is commonly used to teach social-communication skills to people with autism spectrum disorder or intellectual disabilities who have complex communication needs, there is a critical need to evaluate the efficacy of this approach. The aim of this systematic review was to evaluate the quality of single-case experimental design research on the use of high-tech AAC to teach social-communication skills to individuals with autism spectrum disorder or intellectual disabilities who have complex communication needs, to determine if this intervention approach meets the criteria for evidence-based practices as outlined by the What Works Clearinghouse. Additionally, information on the following extended methodological standards is reported on all included studies: participant description, description of setting and materials, interventionist description, baseline and intervention description, maintenance, generalization, procedural integrity, and social validity. The results from 18 multiple-baseline or multiple-probe experiments across 17 studies indicate that using high-tech AAC to teach social-communication skills to individuals with autism spectrum disorder or intellectual disabilities and complex communication needs can be considered an evidence-based practice, although the review of comparison (i.e., alternating treatment) design studies did not indicate that high-tech AAC is significantly better than low-tech AAC.
Experimental investigation of an RNA sequence space
NASA Technical Reports Server (NTRS)
Lee, Youn-Hyung; Dsouza, Lisa; Fox, George E.
1993-01-01
Modern rRNAs are the historic consequence of an ongoing evolutionary exploration of a sequence space. These extant sequences belong to a special subset of the sequence space that is comprised only of those primary sequences that can validly perform the biological function(s) required of the particular RNA. If it were possible to readily identify all such valid sequences, stochastic predictions could be made about the relative likelihood of various evolutionary pathways available to an RNA. Herein an experimental system which can assess whether a particular sequence is likely to have validity as a eubacterial 5S rRNA is described. A total of ten naturally occurring, and hence known to be valid, sequences and two point mutants of unknown validity were used to test the usefulness of the approach. Nine of the ten valid sequences tested positive whereas both mutants tested as clearly defective. The tenth valid sequence gave results that would be interpreted as reflecting a borderline status were the answer not known. These results demonstrate that it is possible to experimentally determine which sequences in local regions of the sequence space are potentially valid 5S rRNAs.
Training and Assessment of Hysteroscopic Skills: A Systematic Review.
Savran, Mona Meral; Sørensen, Stine Maya Dreier; Konge, Lars; Tolsgaard, Martin G; Bjerrum, Flemming
2016-01-01
The aim of this systematic review was to identify studies on hysteroscopic training and assessment. PubMed, Excerpta Medica, the Cochrane Library, and Web of Science were searched in January 2015. Manual screening of references and citation tracking were also performed. Studies on hysteroscopic educational interventions were selected without restrictions on study design, populations, language, or publication year. A qualitative data synthesis including the setting, study participants, training model, training characteristics, hysteroscopic skills, assessment parameters, and study outcomes was performed by 2 authors working independently. Effect sizes were calculated when possible. Overall, 2 raters independently evaluated sources of validity evidence supporting the outcomes of the hysteroscopy assessment tools. A total of 25 studies on hysteroscopy training were identified, of which 23 were performed in simulated settings. Overall, 10 studies used virtual-reality simulators and reported effect sizes for technical skills ranging from 0.31 to 2.65; 12 used inanimate models and reported effect sizes for technical skills ranging from 0.35 to 3.19. One study involved live animal models; 2 studies were performed in clinical settings. The validity evidence supporting the assessment tools used was low. Consensus between the 2 raters on the reported validity evidence was high (94%). This systematic review demonstrated large variations in the effect of different tools for hysteroscopy training. The validity evidence supporting the assessment of hysteroscopic skills was limited. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Translated Versions of Voice Handicap Index (VHI)-30 across Languages: A Systematic Review
SEIFPANAHI, Sadegh; JALAIE, Shohreh; NIKOO, Mohammad Reza; SOBHANI-RAD, Davood
2015-01-01
Background: The aim of this systematic review is to investigate different language versions of the VHI-30 with regard to their validity, reliability, and translation process. Methods: Articles were extracted systematically from several prime databases, including Cochrane, Google Scholar, MEDLINE (via the PubMed gateway), ScienceDirect, and Web of Science, and from their reference lists, using the keyword "Voice Handicap Index" with limitations only on title and time of publication (from 1997 to 2014). Other exclusions (e.g., non-English articles and other versions of the VHI) were applied manually after the papers had been studied. Three authors appraised the methodology of the papers using the 12-item diagnostic test checklist from the Critical Appraisal Skills Programme (CASP) website. After all screenings had been applied, papers that met the eligibility criteria, i.e., reporting translation, validity, and reliability processes, were included in this review. Results: Twelve non-duplicate articles from different languages remained. All reported validity, reliability, and translation methods, which are presented in detail in this review. Conclusion: The preferred translation method in the gathered papers was Brislin's classic back-translation model (1970); although the procedure was not always performed completely, it was more prominent than other translation procedures. High test-retest reliability, high internal consistency, and moderate construct validity across languages in all three VHI-30 domains confirm the applicability of translated VHI-30 versions across languages. PMID:26056664
Shea, Beverley J; Hamel, Candyce; Wells, George A; Bouter, Lex M; Kristjansson, Elizabeth; Grimshaw, Jeremy; Henry, David A; Boers, Maarten
2009-10-01
Our purpose was to measure the agreement, reliability, construct validity, and feasibility of a measurement tool to assess systematic reviews (AMSTAR). We randomly selected 30 systematic reviews from a database. Each was assessed by two reviewers using: (1) the enhanced quality assessment questionnaire (Overview of Quality Assessment Questionnaire [OQAQ]); (2) Sacks' instrument; and (3) our newly developed measurement tool (AMSTAR). We report on reliability (interobserver kappas of the 11 AMSTAR items), intraclass correlation coefficients (ICCs) of the sum scores, construct validity (ICCs of the sum scores of AMSTAR compared with those of other instruments), and completion times. The interrater agreement of the individual items of AMSTAR was substantial with a mean kappa of 0.70 (95% confidence interval [CI]: 0.57, 0.83) (range: 0.38-1.0). Kappas recorded for the other instruments were 0.63 (95% CI: 0.38, 0.78) for enhanced OQAQ and 0.40 (95% CI: 0.29, 0.50) for the Sacks' instrument. The ICC of the total score for AMSTAR was 0.84 (95% CI: 0.65, 0.92) compared with 0.91 (95% CI: 0.82, 0.96) for OQAQ and 0.86 (95% CI: 0.71, 0.94) for the Sacks' instrument. AMSTAR proved easy to apply, each review taking about 15 minutes to complete. AMSTAR has good agreement, reliability, construct validity, and feasibility. These findings need confirmation by a broader range of assessors and a more diverse range of reviews.
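The item-level, chance-corrected agreement reported above for the 11 AMSTAR items is Cohen's kappa; a minimal sketch of its computation follows. The two raters' yes/no judgments below are made-up illustrations, not data from the AMSTAR study.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgments on one item:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n               # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Two hypothetical raters scoring one yes/no checklist item across eight reviews.
rater1 = [1, 1, 0, 1, 0, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(rater1, rater2)
```

Averaging such per-item kappas across the instrument's items, as the study does, summarizes interobserver agreement at the item level; the ICC of the sum scores is a separate, continuous-scale analysis.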
Validated questionnaires heighten detection of difficult asthma comorbidities.
Radhakrishna, Naghmeh; Tay, Tunn Ren; Hore-Lacy, Fiona; Stirling, Robert; Hoy, Ryan; Dabscheck, Eli; Hew, Mark
2017-04-01
Multiple extra-pulmonary comorbidities contribute to difficult asthma, but their diagnosis can be challenging and time-consuming. Previous data on comorbidity detection have focused on clinical assessment, which may miss certain conditions. We aimed to locate relevant validated screening questionnaires to identify extra-pulmonary comorbidities that contribute to difficult asthma, and to evaluate their performance during a difficult asthma evaluation. MEDLINE was searched to identify key extra-pulmonary comorbidities that contribute to difficult asthma. Screening questionnaires were chosen based on ease of use, the presence of a cut-off score, and adequate validation to help systematically identify comorbidities. In a consecutive series of 86 patients referred for systematic evaluation of difficult asthma, questionnaires were administered prior to clinical consultation. Six difficult asthma comorbidities and corresponding screening questionnaires were found: sinonasal disease (allergic rhinitis and chronic rhinosinusitis), vocal cord dysfunction, dysfunctional breathing, obstructive sleep apnea, anxiety and depression, and gastro-oesophageal reflux disease. When the questionnaires were added to the referring clinician's impression, the detection of all six comorbidities was significantly enhanced. The average time for questionnaire administration was approximately 40 minutes. The use of validated screening questionnaires heightens the detection of comorbidities in difficult asthma. The availability of data from a battery of questionnaires prior to consultation can save time and allow clinicians to systematically assess difficult asthma patients and to focus on areas of particular concern. Such an approach would ensure that all contributing comorbidities have been addressed before significant treatment escalation is considered.
Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam
2015-01-01
The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.
Performance optimization of a miniature Joule-Thomson cryocooler using numerical model
NASA Astrophysics Data System (ADS)
Ardhapurkar, P. M.; Atrey, M. D.
2014-09-01
The performance of a miniature Joule-Thomson cryocooler depends on the effectiveness of its heat exchanger, a Hampson-type recuperative heat exchanger. The design of an efficient heat exchanger is crucial for optimum performance of the cryocooler. In the present work, the heat exchanger is numerically simulated under steady-state conditions and the results are validated against experimental data available in the literature. An area correction factor is identified for the calculation of the effective heat transfer area, which takes into account the effect of the helical geometry. To obtain optimum performance of the cryocooler, operating parameters such as mass flow rate and pressure, and design parameters such as heat exchanger length, helical coil diameter, fin dimensions, and fin density, have to be identified. The present work systematically addresses this aspect of the design of a miniature J-T cryocooler.
Broadband/Wideband Magnetoelectric Response
Park, Chee-Sung; Priya, Shashank
2012-01-01
A broadband/wideband magnetoelectric (ME) composite offers new opportunities for sensing wide ranges of both DC and AC magnetic fields. The broadband/wideband behavior is characterized by a flat ME response over a given AC frequency range and DC magnetic bias. The structure proposed in this study operates in the longitudinal-transversal (L-T) mode. In this paper, we provide information on (i) how to design broadband/wideband ME sensors and (ii) how to control the magnitude of the ME response over a desired frequency and DC bias regime. A systematic study was conducted to identify the factors affecting the broadband/wideband behavior by developing experimental models and validating them against the predictions made through finite element modeling. A working prototype of the sensor with flat bands for both DC and AC magnetic field conditions was successfully obtained. These results are quite promising for practical applications such as current probes, low-frequency magnetic field sensing, and ME energy harvesting.
Standards and Methodologies for Characterizing Radiobiological Impact of High-Z Nanoparticles
Subiel, Anna; Ashmore, Reece; Schettino, Giuseppe
2016-01-01
Research on the application of high-Z nanoparticles (NPs) in cancer treatment and diagnosis has recently been the subject of growing interest, with much promise being shown with regards to a potential transition into clinical practice. In spite of numerous publications related to the development and application of nanoparticles for use with ionizing radiation, the literature is lacking coherent and systematic experimental approaches to fully evaluate the radiobiological effectiveness of NPs, validate mechanistic models and allow direct comparison of the studies undertaken by various research groups. The lack of standards and established methodology is commonly recognised as a major obstacle for the transition of innovative research ideas into clinical practice. This review provides a comprehensive overview of radiobiological techniques and quantification methods used in in vitro studies on high-Z nanoparticles and aims to provide recommendations for future standardization for NP-mediated radiation research. PMID:27446499
Nikolov, Svetoslav; Santos, Guido; Wolkenhauer, Olaf; Vera, Julio
2018-02-01
Mathematical modeling of cell differentiation in colonic crypts can contribute to a better understanding of the basic mechanisms underlying colonic tissue organization, but also of its deregulation during carcinogenesis and tumor progression. Here, we combined bifurcation analysis, to assess the effect of time delay on the complex interplay of stem cells and semi-differentiated cells in the niche of colonic crypts, with systematic model perturbation and simulation, to find model-based phenotypes linked to cancer progression. The models suggest that stem cell and semi-differentiated cell population dynamics in colonic crypts can display chaotic behavior. In addition, we found that clinical profiling of colorectal cancer correlates with the in silico phenotypes proposed by the mathematical model. Further, potential therapeutic targets for chemotherapy-resistant phenotypes are proposed, which will in any case require experimental validation.
Williams, A Mark; Ericsson, K Anders
2005-06-01
The number of researchers studying perceptual-cognitive expertise in sport is increasing. The intention in this paper is to review the currently accepted framework for studying expert performance and to consider implications for undertaking research work in the area of perceptual-cognitive expertise in sport. The expert performance approach presents a descriptive and inductive approach for the systematic study of expert performance. The nature of expert performance is initially captured in the laboratory using representative tasks that identify reliably superior performance. Process-tracing measures are employed to determine the mechanisms that mediate expert performance on the task. Finally, the specific types of activities that lead to the acquisition and development of these mediating mechanisms are identified. General principles and mechanisms may be discovered and then validated by more traditional experimental designs. The relevance of this approach to the study of perceptual-cognitive expertise in sport is discussed and suggestions for future work highlighted.
Sasse, Sarah K; Gerber, Anthony N
2015-01-01
Nuclear receptors (NRs) are widely targeted to treat a range of human diseases. Feed-forward loops are an ancient mechanism through which single cell organisms organize transcriptional programming and modulate gene expression dynamics, but they have not been systematically studied as a regulatory paradigm for NR-mediated transcriptional responses. Here, we provide an overview of the basic properties of feed-forward loops as predicted by mathematical models and validated experimentally in single cell organisms. We review existing evidence implicating feed-forward loops as important in controlling clinically relevant transcriptional responses to estrogens, progestins, and glucocorticoids, among other NR ligands. We propose that feed-forward transcriptional circuits are a major mechanism through which NRs integrate signals, exert temporal control over gene regulation, and compartmentalize client transcriptomes into discrete subunits. Implications for the design and function of novel selective NR ligands are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.
Prospathopoulos, John M; Voutsinas, Spyros G
2007-09-01
The determination of appropriate sound speed profiles for modeling near-ground propagation is investigated using a ray tracing model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure that integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into the equations appropriate coefficients representing the effects of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation of near-ground propagation in downward- and upward-refracting atmospheres is made using experimental data. Guidelines for suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
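The trajectory-integration step at the core of such a ray model can be sketched for a horizontally stratified atmosphere. The linear sound-speed profile, step size, and step count below are illustrative assumptions, and the eigenray search itself (iterating over launch angles until a ray lands on the receiver) is omitted.

```python
import math

def trace_ray(theta0, c, dcdz, ds=0.1, steps=2000, z0=0.0):
    """Euler integration of the 2-D ray trajectory equations in a medium
    with height-dependent sound speed c(z), theta measured from horizontal:
        dx/ds = cos(theta),  dz/ds = sin(theta),
        dtheta/ds = -(dc/dz) * cos(theta) / c(z),
    the last being a consequence of Snell's law (cos(theta)/c constant
    along the ray)."""
    x, z, th = 0.0, z0, theta0
    path = [(x, z)]
    for _ in range(steps):
        th += ds * (-dcdz(z) * math.cos(th) / c(z))
        x += ds * math.cos(th)
        z += ds * math.sin(th)
        path.append((x, z))
    return path

# Illustrative case: sound speed increasing with height at 0.1 (m/s)/m,
# so a horizontally launched ray bends downward (toward lower sound speed).
c0, g = 340.0, 0.1
path = trace_ray(0.0, lambda z: c0 + g * z, lambda z: g)
xf, zf = path[-1]
```

For a linear profile the ray is, to good approximation, a circular arc of radius R = c0/g, so after arc length s the height drop is roughly s²/(2R); the numerical path above reproduces that, which is a convenient sanity check before adding ground reflections and absorption losses.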
Computing Prediction and Functional Analysis of Prokaryotic Propionylation.
Wang, Li-Na; Shi, Shao-Ping; Wen, Ping-Ping; Zhou, Zhi-You; Qiu, Jian-Ding
2017-11-27
Identification and systematic analysis of candidates for protein propionylation are crucial steps toward understanding its molecular mechanisms and biological functions. Although several proteome-scale methods have been applied to delineate potential propionylated proteins, the majority of lysine-propionylated substrates and their roles in pathological physiology remain largely unknown. Experimental prokaryotic propionylation data were collated from various databases and the literature and used to train a support vector machine on features chosen via a three-step feature selection method. A novel online tool for seeking potential lysine-propionylated sites (PropSeek) ( http://bioinfo.ncu.edu.cn/PropSeek.aspx ) was built. Independent test results of leave-one-out and n-fold cross-validation were similar to each other, showing that PropSeek is a stable and robust predictor with satisfying performance. Meanwhile, analyses of Gene Ontology, Kyoto Encyclopedia of Genes and Genomes pathways, and protein-protein interactions implied a potential role of prokaryotic propionylation in protein synthesis and metabolism.
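The leave-one-out versus n-fold comparison used to validate PropSeek can be illustrated with a minimal pure-Python cross-validation harness. The nearest-centroid classifier and synthetic two-class data below are deliberate stand-ins for the study's support vector machine and sequence-derived features, chosen only to keep the sketch self-contained.

```python
import random
from statistics import mean

def fit_centroids(X, y):
    """Stand-in classifier: per-class feature centroids."""
    return {c: [mean(col) for col in zip(*[x for x, lab in zip(X, y) if lab == c])]
            for c in set(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

def cv_accuracy(X, y, splits):
    """Mean test accuracy over a list of (train_indices, test_indices) splits."""
    accs = []
    for train, test in splits:
        cents = fit_centroids([X[i] for i in train], [y[i] for i in train])
        accs.append(mean(predict(cents, X[i]) == y[i] for i in test))
    return mean(accs)

def loo_splits(n):
    """Leave-one-out: each sample is the test set exactly once."""
    return [([j for j in range(n) if j != i], [i]) for i in range(n)]

def kfold_splits(n, k, seed=0):
    """Shuffled k-fold splits."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [([j for g, f in enumerate(folds) if g != i for j in f], folds[i])
            for i in range(k)]

# Synthetic, well-separated two-class data: 40 samples, 2 features.
rng = random.Random(1)
X = [[rng.gauss(c, 1.0), rng.gauss(-c, 1.0)] for c in (0, 2) for _ in range(20)]
y = [0] * 20 + [1] * 20
loo_acc = cv_accuracy(X, y, loo_splits(40))
ten_fold_acc = cv_accuracy(X, y, kfold_splits(40, 10))
```

As in the study's report, the two estimates should agree closely on a stable predictor; a large gap between leave-one-out and k-fold accuracy would flag sensitivity to the training-set composition.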
Determining the Localization of Carbohydrate Active Enzymes Within Gram-Negative Bacteria.
McLean, Richard; Inglis, G Douglas; Mosimann, Steven C; Uwiera, Richard R E; Abbott, D Wade
2017-01-01
Investigating the subcellular location of secreted proteins is valuable for illuminating their biological function. Although several bioinformatics programs currently exist to predict the destination of a trafficked protein using its signal peptide sequence, these programs have limited accuracy and often require experimental validation. Here, we present a systematic method to fractionate gram-negative cells and characterize the subcellular localization of secreted carbohydrate active enzymes (CAZymes). This method involves four parallel approaches that reveal the relative abundance of protein within the cytoplasm, periplasm, outer membrane, and extracellular environment. Cytoplasmic and periplasmic proteins are fractionated by lysis and osmotic shock, respectively. Outer membrane bound proteins are determined by comparing cells before and after exoproteolytic digestion. Extracellularly secreted proteins are collected from the media and concentrated. These four different fractionations can then be probed for the presence and quantity of target proteins using immunochemical methods such as Western blots and ELISAs, or enzyme activity assays.
Cold Cracking During Direct-Chill Casting
NASA Astrophysics Data System (ADS)
Eskin, D. G.; Lalpoor, M.; Katgerman, L.
Cold cracking is among the least studied yet most important defects occurring during direct-chill casting. The spontaneous nature of this defect makes its systematic study almost impossible, and computer simulation of the thermomechanical behavior of the ingot during cooling after the end of solidification requires constitutive parameters of high-strength aluminum alloys in the as-cast condition, which are not readily available. In this paper we describe the constitutive behavior of high-strength 7xxx series aluminum alloys in the as-cast condition based on experimentally measured tensile properties at different strain rates and temperatures, plane-strain fracture toughness at different temperatures, and thermal contraction. In addition, the fracture and structure of the specimens and of real cold-cracked billets are examined. As a result, a fracture-mechanics-based criterion for cold cracking, based on the critical crack length, is suggested and validated against pilot-scale billet casting.
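A critical-crack-length criterion of this kind rests on the textbook relation K_Ic = Y·σ·√(π·a_c), solved for a_c. The toughness, stress, and geometry factor below are assumed for illustration, not values from the paper:

```python
import math

def critical_crack_length(k_ic, stress, Y=1.12):
    """Critical crack length a_c (m) from K_Ic = Y * sigma * sqrt(pi * a_c).
    k_ic in MPa*sqrt(m), stress in MPa; Y ~ 1.12 is a typical edge-crack
    geometry factor (an assumption here)."""
    return (k_ic / (Y * stress)) ** 2 / math.pi

# Illustrative numbers for a low-toughness as-cast billet under residual stress:
a_c = critical_crack_length(k_ic=15.0, stress=100.0)
print(f"critical crack length: {a_c * 1000:.1f} mm")
```

A crack (or pore cluster) longer than a_c would propagate catastrophically, which is the basis for comparing predicted residual stresses against observed billet cracking.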
A polyvalent harmonic coil testing method for small-aperture magnets
NASA Astrophysics Data System (ADS)
Arpaia, Pasquale; Buzio, Marco; Golluccio, Giancarlo; Walckiers, Louis
2012-08-01
A method to characterize permanent and fast-pulsed iron-dominated magnets with small apertures is presented. The harmonic coil measurement technique is enhanced specifically for small-aperture magnets by (1) in situ calibration, to cope with search-coil production inaccuracies, (2) rotation of the magnet around its axis, to correct systematic effects, and (3) flux measurement with stationary coils at different angular positions, to cover fast-pulsed magnets. This method allows a quadrupole magnet for particle accelerators to be characterized completely, by assessing multipole field components, magnetic axis position, and field direction. In this paper, the metrological problems arising from testing small-aperture magnets are first highlighted. Then, the basic ideas of the proposed method and the architecture of the corresponding measurement system are illustrated. Finally, experimental validation results are shown for small-aperture permanent and fast-ramped quadrupole magnets for the new linear accelerator Linac4 at CERN (European Organization for Nuclear Research).
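The core of the harmonic coil technique (multipole components appear as Fourier harmonics of the flux linked by a coil rotating in the field) can be sketched on a synthetic signal; the amplitudes are invented:

```python
import numpy as np

# Synthetic flux of a coil rotating in a quadrupole field with a small
# sextupole error (arbitrary units).
N = 512
theta = 2 * np.pi * np.arange(N) / N
flux = 1.0 * np.cos(2 * theta) + 0.01 * np.cos(3 * theta + 0.3)

# One-sided FFT: |c_n| gives the strength of the n-th harmonic (multipole order n)
c = np.fft.rfft(flux) / (N / 2)
main = np.abs(c[2])            # quadrupole (main field) amplitude
b3 = np.abs(c[3]) / main       # sextupole component relative to the main field
print(f"relative sextupole: {b3:.4f}")
```

Real measurement systems add the calibration and axis-correction steps the abstract describes, but the harmonic decomposition itself is exactly this Fourier analysis of flux versus coil angle.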
Systematic study of error sources in supersonic skin-friction balance measurements
NASA Technical Reports Server (NTRS)
Allen, J. M.
1976-01-01
An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.
Schulz, Eric; Cokely, Edward T; Feltz, Adam
2011-12-01
Many philosophers appeal to intuitions to support some philosophical views. However, there is reason to be concerned about this practice, as scientific evidence has documented systematic bias in philosophically relevant intuitions as a function of seemingly irrelevant features (e.g., personality). One popular defense used to insulate philosophers from these concerns holds that philosophical expertise eliminates the influence of these extraneous factors. Here, we test this assumption. We present data suggesting that verifiable philosophical expertise in the free will debate (as measured by a reliable and validated test of expert knowledge) does not eliminate the influence of one important extraneous feature (i.e., the heritable personality trait extraversion) on judgments concerning freedom and moral responsibility. These results suggest that, in at least some important cases, the expertise defense fails. Implications for the practice of philosophy, experimental philosophy, and applied ethics are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
Anharmonic Vibrational Spectroscopy of Transition Metal Complexes
NASA Astrophysics Data System (ADS)
Latouche, Camille; Bloino, Julien; Barone, Vincenzo
2014-06-01
Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems, facilitating the systematic interpretation of experimental data and the full characterization of complex molecules. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest of these systems owing to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications to systems of direct technological or biological interest.
Performance characterization of material identification systems
NASA Astrophysics Data System (ADS)
Brown, Christopher D.; Green, Robert L.
2006-10-01
In recent years a number of analytical devices have been proposed and marketed specifically to enable field-based material identification. Technologies reliant on mass, near- and mid-infrared, and Raman spectroscopies are available today, and other platforms are imminent. These systems tend to perform material recognition based on an on-board library of material signatures. While figures of merit for traditional quantitative analytical sensors are broadly established (e.g., SNR, selectivity, sensitivity, limit of detection/decision), measures of performance for material identification systems have not been systematically discussed. In this paper we present an approach to performance characterization similar in spirit to ROC curves, but including elements of precision-recall curves and specialized for the intended use of material identification systems. Important experimental considerations are discussed, including study design, sources of bias, uncertainty estimation, and cross-validation, and the approach as a whole is illustrated using a commercially available handheld Raman material identification system.
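An ROC-style characterization of a library-matching identifier can be sketched as follows; the match-score distributions are simulated stand-ins, not data from any real instrument:

```python
import numpy as np

rng = np.random.default_rng(1)
scores_match = rng.normal(0.8, 0.1, 500)      # queries whose material is in-library
scores_nonmatch = rng.normal(0.5, 0.1, 500)   # queries of a different material

t = 0.65                                      # one candidate decision threshold
tpr = (scores_match >= t).mean()              # rate of correct identifications
fpr = (scores_nonmatch >= t).mean()           # rate of false identifications
# Threshold-free summary: Mann-Whitney estimate of the area under the ROC curve
auc = (scores_match[:, None] > scores_nonmatch[None, :]).mean()
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}, AUC={auc:.3f}")
```

Sweeping `t` over the score range traces the full ROC curve; the paper's point is that such curves, plus uncertainty estimates, should accompany any claimed identification performance.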
Non-overlap subaperture interferometric testing for large optics
NASA Astrophysics Data System (ADS)
Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian
2017-08-01
It has been shown that the number of subapertures and the amount of overlap have a significant influence on stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining reconstruction accuracy. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. Square Zernike polynomials are employed to fit such a wavefront. The effect of the minimum number of fitting terms on the accuracy of NOSAI, and the sensitivities of NOSAI to subaperture alignment errors, power systematic error, and random noise, are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).
Mapping the ecological networks of microbial communities.
Xiao, Yandong; Angulo, Marco Tulio; Friedman, Jonathan; Waldor, Matthew K; Weiss, Scott T; Liu, Yang-Yu
2017-12-11
Mapping the ecological networks of microbial communities is a necessary step toward understanding their assembly rules and predicting their temporal behavior. However, existing methods require assuming a particular population dynamics model, which is not known a priori. Moreover, those methods require fitting longitudinal abundance data, which are often not informative enough for reliable inference. To overcome these limitations, here we develop a new method based on steady-state abundance data. Our method can infer the network topology and inter-taxa interaction types without assuming any particular population dynamics model. Additionally, when the population dynamics is assumed to follow the classic Generalized Lotka-Volterra model, our method can infer the inter-taxa interaction strengths and intrinsic growth rates. We systematically validate our method using simulated data, and then apply it to four experimental data sets. Our method represents a key step towards reliable modeling of complex, real-world microbial communities, such as the human gut microbiota.
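A minimal sketch of the steady-state idea, assuming Generalized Lotka-Volterra dynamics: at a steady state x*, every present taxon i satisfies r_i + Σ_j a_ij x*_j = 0, so each row of (A, r) follows from linear regression over steady-state samples. The per-row scale is fixed here by the convention a_ii = -1, and the data are simulated, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = 0.1 * rng.normal(size=(n, n))   # ground-truth interaction matrix
np.fill_diagonal(A, -1.0)           # self-limitation fixed to -1 (scale convention)
r = rng.uniform(0.5, 1.0, n)        # intrinsic growth rates

# Steady states of random subcommunities: on support S, A[S,S] @ x[S] = -r[S]
X = []
for _ in range(12):
    S = rng.random(n) < 0.8
    x = np.zeros(n)
    if S.any():
        x[S] = np.linalg.solve(A[np.ix_(S, S)], -r[S])
    X.append(x)
X = np.array(X)

# Recover each row: with a_ii = -1, x_i = r_i + sum_{j != i} a_ij x_j
A_hat, r_hat = -np.eye(n), np.zeros(n)
for i in range(n):
    rows = X[:, i] != 0                       # samples where taxon i is present
    others = [j for j in range(n) if j != i]
    M = np.hstack([X[rows][:, others], np.ones((rows.sum(), 1))])
    coef, *_ = np.linalg.lstsq(M, X[rows, i], rcond=None)
    A_hat[i, others], r_hat[i] = coef[:-1], coef[-1]
print("max |A_hat - A|:", np.abs(A_hat - A).max())
```

With noiseless synthetic steady states the recovery is exact; the paper's contribution is handling the model-free case, where only network topology and interaction signs are inferred.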
Tailored Welding Technique for High Strength Al-Cu Alloy for Higher Mechanical Properties
NASA Astrophysics Data System (ADS)
Biradar, N. S.; Raman, R.
AA2014 aluminum alloy, with 4.5% Cu as the major alloying element, offers the highest strength and hardness values in the T6 temper and finds extensive use in aircraft primary structures. However, this alloy is difficult to join by fusion welding because the dendritic structure formed can seriously degrade weld properties. Among the welding processes, the AC-TIG technique is the most widely used; with conventional TIG, as-welded yield strength was in the range of 190-195 MPa. The welding metallurgy of AA2014 was critically reviewed and the factors responsible for the lower properties were identified. Square-wave AC TIG with transverse mechanical arc oscillation (TMAO) was postulated to improve the weld strength. Systematic experimentation on 4 mm thick plates achieved yield strengths in the range of 230-240 MPa. Thorough characterization, including optical microscopy and SEM/EDX, was conducted to validate the metallurgical phenomena responsible for the improvement in weld properties.
Temporal controls of the asymmetric cell division cycle in Caulobacter crescentus.
Li, Shenghua; Brazhnik, Paul; Sobral, Bruno; Tyson, John J
2009-08-01
The asymmetric cell division cycle of Caulobacter crescentus is orchestrated by an elaborate gene-protein regulatory network, centered on three major control proteins, DnaA, GcrA and CtrA. The regulatory network is cast into a quantitative computational model to investigate in a systematic fashion how these three proteins control the relevant genetic, biochemical and physiological properties of proliferating bacteria. Different controls for both swarmer and stalked cell cycles are represented in the mathematical scheme. The model is validated against observed phenotypes of wild-type cells and relevant mutants, and it predicts the phenotypes of novel mutants and of known mutants under novel experimental conditions. Because the cell cycle control proteins of Caulobacter are conserved across many species of alpha-proteobacteria, the model we are proposing here may be applicable to other genera of importance to agriculture and medicine (e.g., Rhizobium, Brucella).
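The "quantitative computational model" of the abstract is an ODE system for protein concentrations. A toy two-variable activator-repressor loop (invented here, not the paper's DnaA/GcrA/CtrA network) shows the general form such models take:

```python
import numpy as np
from scipy.integrate import solve_ivp

def loop(t, s, k1=1.0, k2=1.0, K=0.5, d=0.5):
    """Toy gene-protein feedback loop: a repressor R, driven by an activator A,
    shuts the activator off via a Hill term. All parameters are illustrative."""
    a, r = s
    dadt = k1 / (1 + (r / K) ** 4) - d * a   # synthesis repressed by R, decay
    drdt = k2 * a - d * r                    # synthesis driven by A, decay
    return [dadt, drdt]

sol = solve_ivp(loop, (0, 50), [0.1, 0.0])
a_end, r_end = sol.y[:, -1]
print(f"late-time levels: activator={a_end:.3f}, repressor={r_end:.3f}")
```

This simple two-variable loop relaxes to a steady state; the Caulobacter model generates cycles by coupling many such modules to DNA replication and cell division events.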
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mruczkiewicz, M.; Krawczyk, M.
2014-03-21
We study the effect of one-side metallization of a uniform ferromagnetic thin film on its spin-wave dispersion relation in the Damon–Eshbach geometry. Due to the finite conductivity of the metallic cover layer on the ferromagnetic film, the spin-wave dispersion relation may be nonreciprocal only in a limited wave-vector range. We provide an approximate analytical solution for the spin-wave frequency, discuss its validity, and compare it with numerical results. The dispersion is analyzed systematically by varying the parameters of the ferromagnetic film, the metal cover layer and the value of the external magnetic field. The conclusions drawn from this analysis allow us to define a structure based on a 30 nm thick CoFeB film with an experimentally accessible nonreciprocal dispersion relation in a relatively wide wave-vector range.
Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.
This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework for evaluating radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, and environmental shielding) on detector responses and algorithm performance using synthetic time series data. This work consists of data collection campaigns in a canonical, controlled environment subject to complete radiological characterization, which help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. These data also provide an archival benchmark dataset for the radiation detection community. The data reported here span four data collection campaigns conducted between May 2015 and September 2016.
Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine
Shrestha, Amit; Joshi, Umashankar; Zheng, Ziliang; Badawy, Tamer; Henein, Naeim A (Wayne State University, Detroit, MI, USA)
2014-04-15
A two-component JP-8 surrogate is validated in a single cylinder diesel engine; validation parameters include ignition delay.
Anxiety measures validated in perinatal populations: a systematic review.
Meades, Rose; Ayers, Susan
2011-09-01
Research and screening of anxiety in the perinatal period is hampered by a lack of psychometric data on self-report anxiety measures used in perinatal populations. This paper aimed to review self-report measures that have been validated with perinatal women. A systematic search was carried out of four electronic databases. Additional papers were obtained through searching identified articles. Thirty studies were identified that reported validation of an anxiety measure with perinatal women. Most commonly validated self-report measures were the General Health Questionnaire (GHQ), State-Trait Anxiety Inventory (STAI), and Hospital Anxiety and Depression Scales (HADS). Of the 30 studies included, 11 used a clinical interview to provide criterion validity. Remaining studies reported one or more other forms of validity (factorial, discriminant, concurrent and predictive) or reliability. The STAI shows criterion, discriminant and predictive validity and may be most useful for research purposes as a specific measure of anxiety. The Kessler 10 (K-10) may be the best short screening measure due to its ability to differentiate anxiety disorders. The Depression Anxiety Stress Scales 21 (DASS-21) measures multiple types of distress, shows appropriate content, and remains to be validated against clinical interview in perinatal populations. Nineteen studies did not report sensitivity or specificity data. The early stages of research into perinatal anxiety, the multitude of measures in use, and methodological differences restrict comparison of measures across studies. There is a need for further validation of self-report measures of anxiety in the perinatal period to enable accurate screening and detection of anxiety symptoms and disorders. Copyright © 2010 Elsevier B.V. All rights reserved.
Confidence in outcome estimates from systematic reviews used in informed consent.
Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette
2016-12-01
Evidence-based dentistry now guides informed consent, in which clinicians are obliged to provide patients with the most current best evidence, or best estimates of outcomes, for regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet clinicians require that the estimates provided by systematic reviews be verified for validity and reliability, and contextualized as to performance competency, so that clinicians may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates in which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or by checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure).
Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding regimens, therapies, treatments, and equipment or devices to patients in their clinical practices and when developing personal, oral health care, treatment plans. Copyright © 2016 Elsevier Inc. All rights reserved.
Heinl, D; Prinsen, C A C; Sach, T; Drucker, A M; Ofenloch, R; Flohr, C; Apfelbacher, C
2017-04-01
Quality of life (QoL) is one of the core outcome domains identified by the Harmonising Outcome Measures for Eczema (HOME) initiative to be assessed in every eczema trial. There is uncertainty about the most appropriate QoL instrument to measure this domain in infants, children and adolescents. To systematically evaluate the measurement properties of existing measurement instruments developed and/or validated for the measurement of QoL in infants, children and adolescents with eczema. A systematic literature search in PubMed and Embase, complemented by a thorough hand search of reference lists, retrieved studies on measurement properties of eczema QoL instruments for infants, children and adolescents. For all eligible studies, we judged the adequacy of the measurement properties and the methodological study quality with the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Results from different studies were summarized in a best-evidence synthesis and formed the basis to assign four degrees of recommendation. Seventeen articles, three of which were found by hand search, were included. These 17 articles reported on 24 instruments. No instrument can be recommended for use in all eczema trials because none fulfilled all required adequacy criteria. With adequate internal consistency, reliability and hypothesis testing, the U.S. version of the Childhood Atopic Dermatitis Impact Scale (CADIS), a proxy-reported instrument, has the potential to be recommended depending on the results of further validation studies. All other instruments, including all self-reported ones, lacked significant validation data. Currently, no QoL instrument for infants, children and adolescents with eczema can be highly recommended. Future validation research should primarily focus on the CADIS, but also attempt to broaden the evidence base for the validity of self-reported instruments. © 2016 British Association of Dermatologists.
Mundy, Lily R; Miller, H Catherine; Klassen, Anne F; Cano, Stefan J; Pusic, Andrea L
2016-10-01
Patient-reported outcomes (PROs) are of growing importance in research and clinical care and may be used as primary outcomes or as complements to traditional surgical outcomes. In assessing the impact of surgical and traumatic scars, PROs are often the most meaningful. To assess outcomes from the patient perspective, rigorously developed and validated PRO instruments are essential. The authors conducted a systematic literature review to identify PRO instruments developed and/or validated for patients with surgical and/or non-burn traumatic scars. Identified instruments were assessed for content, development process, and validation under recommended guidelines for PRO instrument development. The systematic review identified 6534 articles. After review, we identified four PRO instruments meeting inclusion criteria: the Patient and Observer Scar Assessment Scale (POSAS), the Bock quality of life questionnaire for patients with keloid and hypertrophic scarring (Bock), the Patient Scar Assessment Questionnaire (PSAQ), and the Patient-Reported Impact of Scars Measure (PRISM). Common concepts measured were symptoms and psychosocial well-being. Only PSAQ had a dedicated appearance domain. Qualitative data were used to inform content for the PSAQ and PRISM, and a modern psychometric approach (Rasch Measurement Theory) was used to develop PRISM and to test POSAS. Overall, PRISM demonstrated the most rigorous design and validation process; however, it was limited by the lack of a dedicated appearance domain. PRO instruments to evaluate outcomes in scars exist but vary in terms of concepts measured and psychometric soundness. This review discusses the strengths and weaknesses of existing instruments, highlighting the need for future scar-focused PRO instrument development. This journal requires that authors assign a level of evidence to each article.
For a full description of these Evidence-Based Medicine ratings, please refer to Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Schmid, R; Eschen, A; Rüegger-Frey, B; Martin, M
2013-06-01
There is growing evidence that individuals with cognitive impairment and dementia require systematic assessment of needs for the selection of optimal treatments. Currently no valid instrument is applicable for illness-related need assessment in this growing population. The purpose of this study was to develop and validate a new instrument ("Bedürfnisinventar bei Gedächtnisstörungen", BIG-65) that systematically assesses illness-related needs. The development was based on an adequate theoretical framework and standardised procedural guidelines, and the instrument was validated in an appropriate sample of individuals attending a Swiss memory clinic (n = 83). The BIG-65 provides a comprehensive range of biopsychosocial and environmental needs items and offers a dementia-friendly structure for the assessment of illness-related needs. The BIG-65 has high face validity and very high test-retest reliability (rtt = 0.916). On average 3.5 (SD = 3.7) unmet needs were reported. Most frequently mentioned needs were: "forget less" (50%), "better concentration" (23.2%), "information on illness" (20.7%), "information on treatments" (17.1%), "less worry", "less irritable", "improve mood", "improve orientation" (13.4% each). Needs profiles differed between patients with preclinical (subjective cognitive impairment, mild cognitive impairment) and clinical (dementia) diagnoses. The BIG-65 reliably assesses illness-related needs in individuals with moderate dementia. With decreasing cognitive function or an MMSE <20 points, additional methods such as observation of emotional expression may be applied. According to our results, individuals with cognitive impairment and dementia pursue individual strategies to stabilize their quality of life. In addition to the assessment of objective illness symptoms, the selection of optimal treatments may profit from a systematic needs assessment to optimally support patients in their individual quality of life strategies.
Screening emergency department patients for opioid drug use: A qualitative systematic review.
Sahota, Preet Kaur; Shastry, Siri; Mukamel, Dana B; Murphy, Linda; Yang, Narisu; Lotfipour, Shahram; Chakravarthy, Bharath
2018-05-24
The opioid drug epidemic is a major public health concern and an economic burden in the United States. The purpose of this systematic review is to assess the reliability and validity of screening instruments used in emergency medicine settings to detect opioid use in patients and to assess psychometric data for each screening instrument. PubMed/MEDLINE, PsycINFO, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Web of Science, Cumulative Index to Nursing and Allied Health Literature and ClinicalTrials.gov were searched for articles published up to May 2018. The extracted articles were independently screened for eligibility by two reviewers. We extracted 1555 articles for initial screening and 95 articles were assessed for full-text eligibility. Six articles were extracted from the full-text assessment. Six instruments were identified from the final article list: Screener and Opioid Assessment for Patients with Pain - Revised; Drug Abuse Screening Test; Opioid Risk Tool; Current Opioid Misuse Measure; an Emergency Medicine Providers Clinician Assessment Questionnaire; and an Emergency Provider Impression Data Collection Form. Screening instrument characteristics, and reliability and validity data were extracted from the six studies. A meta-analysis was not conducted due to heterogeneity between the studies. There is a lack of validity and reliability evidence in all six articles; and sensitivity, specificity and predictive values varied between the different instruments. These instruments cannot be validated for use in emergency medicine settings. There is no clear evidence to state which screening instruments are appropriate for use in detecting opioid use disorders in emergency medicine patients. There is a need for brief, reliable, valid and feasible opioid use screening instruments in the emergency medicine setting. Copyright © 2018 Elsevier Ltd. All rights reserved.
van Bokhorst-de van der Schueren, Marian A E; Guaitoli, Patrícia Realino; Jansma, Elise P; de Vet, Henrica C W
2014-02-01
Numerous nutrition screening tools for the hospital setting have been developed. The aim of this systematic review is to study construct or criterion validity and predictive validity of nutrition screening tools for the general hospital setting. A systematic review of English, French, German, Spanish, Portuguese and Dutch articles identified via MEDLINE, Cinahl and EMBASE (from inception to the 2nd of February 2012). Additional studies were identified by checking reference lists of identified manuscripts. Search terms included key words for malnutrition, screening or assessment instruments, and terms for hospital setting and adults. Data were extracted independently by 2 authors. Only studies expressing the (construct, criterion or predictive) validity of a tool were included. 83 studies (32 screening tools) were identified: 42 studies on construct or criterion validity versus a reference method and 51 studies on predictive validity on outcome (i.e. length of stay, mortality or complications). None of the tools performed consistently well to establish the patients' nutritional status. For the elderly, MNA performed fair to good, for the adults MUST performed fair to good. SGA, NRS-2002 and MUST performed well in predicting outcome in approximately half of the studies reviewed in adults, but not in older patients. Not one single screening or assessment tool is capable of adequate nutrition screening as well as predicting poor nutrition related outcome. Development of new tools seems redundant and will most probably not lead to new insights. New studies comparing different tools within one patient population are required. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are increasingly applied to library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms by performing 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble-metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene were used to build a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on optimization performance was observed. Binary encodings were found to be the preferred encoding in most cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
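Constraint handling via a penalty function, one of the strategies compared in the study, can be illustrated with a toy binary-encoded evolutionary search; the problem, penalty weight, and loop sizes are all invented:

```python
import random

random.seed(3)
COST = [4, 3, 7, 5, 2, 6, 8, 1]   # per-component cost (binary encoding: 1 = use it)
BUDGET = 15

def fitness(bits):
    """Maximize the number of selected components; infeasible candidates are
    penalized in proportion to the budget violation rather than discarded."""
    value = sum(bits)
    cost = sum(c for b, c in zip(bits, COST) if b)
    return value - 10 * max(0, cost - BUDGET)

def mutate(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

# Simple (mu + lambda)-style loop with truncation selection
pop = [[random.randint(0, 1) for _ in COST] for _ in range(20)]
for _ in range(200):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=fitness, reverse=True)[:20]

best = pop[0]
print(best, fitness(best))
```

A repair-based alternative would instead remove components from an over-budget candidate until it becomes feasible; the paper's finding is that which strategy wins depends on the problem and representation.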
Randomization Does Not Help Much, Comparability Does
Saint-Mont, Uwe
2015-01-01
According to R.A. Fisher, randomization “relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed.” Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable. PMID:26193621
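The small-sample imbalance the paper describes is easy to check numerically. A minimal Python simulation (our sketch, not one of the paper's models): 20 units, half of which carry a binary nuisance factor, are randomly split into two groups of 10, and we count how often the factor ends up unevenly distributed:

```python
import random

random.seed(1)
n, trials = 20, 10_000
units = [1] * (n // 2) + [0] * (n // 2)   # half the units carry the factor
imbalanced = 0
for _ in range(trials):
    random.shuffle(units)
    in_treatment = sum(units[: n // 2])   # carriers assigned to treatment
    in_control = n // 2 - in_treatment    # the rest land in control
    if abs(in_treatment - in_control) >= 2:
        imbalanced += 1
rate = imbalanced / trials
print(f"P(imbalance >= 2 units): {rate:.2f}")
```

The hypergeometric probability of a perfectly balanced split here is only about 0.34, so roughly two runs in three leave the groups imbalanced by at least two carriers; as n grows, the imbalance shrinks relative to the group size, which is consistent with the paper's emphasis on sample size.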
NASA Astrophysics Data System (ADS)
Xu, Chi; Senaratne, Charutha L.; Culbertson, Robert J.; Kouvetakis, John; Menéndez, José
2017-09-01
The compositional dependence of the lattice parameter in Ge(1-y)Sn(y) alloys has been determined from combined X-ray diffraction and Rutherford backscattering (RBS) measurements of a large set of epitaxial films with compositions in the 0 < y < 0.14 range. In view of contradictory prior results, a critical analysis of this method has been carried out, with emphasis on nonlinear elasticity corrections and systematic errors in popular RBS simulation codes. The approach followed is validated by showing that measurements of Ge(1-x)Si(x) films yield a bowing parameter θ_GeSi = -0.0253(30) Å, in excellent agreement with the classic work by Dismukes. When the same methodology is applied to Ge(1-y)Sn(y) alloy films, it is found that the bowing parameter θ_GeSn is zero within experimental error, so that the system follows Vegard's law. This is in qualitative agreement with ab initio theory, but the value of the experimental bowing parameter is significantly smaller than the theoretical prediction. Possible reasons for this discrepancy are discussed in detail.
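The bowing analysis rests on the standard quadratic deviation from Vegard's law, a(x) = (1 - x) * a_A + x * a_B + theta * x * (1 - x), where theta = 0 recovers the linear Vegard interpolation. A brief Python sketch (lattice constants are common literature values in angstroms for Ge, Si and alpha-Sn, not data from this study):

```python
# Bowing relation: a(x) = (1 - x) * a_host + x * a_solute + theta * x * (1 - x);
# theta = 0 recovers Vegard's law. Lattice constants below are textbook
# values in angstroms, not numbers taken from this paper.
A_GE, A_SI, A_SN = 5.6579, 5.4310, 6.4892

def alloy_lattice(a_host, a_solute, x, theta=0.0):
    """Relaxed lattice parameter of a host(1-x) solute(x) alloy."""
    return (1 - x) * a_host + x * a_solute + theta * x * (1 - x)

# Ge(1-x)Si(x) with the reported negative bowing parameter:
a_gesi = alloy_lattice(A_GE, A_SI, 0.5, theta=-0.0253)
# Ge(1-y)Sn(y) with zero bowing, i.e. Vegard's law:
a_gesn = alloy_lattice(A_GE, A_SN, 0.10)
```

A negative theta pulls the mid-composition lattice parameter slightly below the linear interpolation, which is what the GeSi control measurement checks.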
NASA Astrophysics Data System (ADS)
Palleri, Francesca; Baruffaldi, Fabio; Angelini, Anna Lisa; Ferri, Andrea; Spezi, Emiliano
2008-12-01
In external beam radiotherapy, the calculation of dose distributions for patients with hip prostheses is critical. Metallic implants not only degrade image quality but also perturb the dose distribution. Conventional treatment planning systems do not accurately account for the heterogeneities introduced by high-Z prosthetic implants, especially at interfaces. The materials studied in this work were chosen on the basis of a statistical investigation of the hip prostheses implanted in 70 medical centres. The first aim of this study is a systematic characterization of materials used for hip prostheses, carried out with the BEAMnrc Monte Carlo code. The second aim is to evaluate the capabilities of a specific treatment planning system, Pinnacle 3, when dealing with dose calculations in the presence of metals, including regions close to high-Z gradients. In both cases, an accurate comparison against experimental measurements was carried out for two clinical photon beam energies (6 MV and 18 MV) and for two experimental set-ups: metallic cylinders inserted in a water phantom and in a specifically built PMMA slab. Our results show agreement within 2% between experiments and MC simulations. TPS calculations agree with experiments within 3%.
Investigation of systematic effects in atmospheric microthermal probe data
NASA Astrophysics Data System (ADS)
Roper, Daniel S.
1992-12-01
The propagation of electromagnetic radiation through the atmosphere is a crucial aspect of laser target acquisition and surveillance systems and is vital to the effective implementation of some Theater Missile Defense systems. Atmospheric turbulence degrades the image or laser beam quality along an optical path. During the past decade, the U.S. Air Force's Geophysics Directorate of Phillips Laboratory collected high speed differential temperature measurements of the atmospheric temperature structure parameter, C_t^2, and the related index of refraction structure parameter, C_n^2. The stratospheric results show a 1-2 order of magnitude increase in day turbulence values compared to night. Resolving whether these results were real or an artifact of solar contamination is a critical Theater Missile Defense issue. This thesis analyzed the thermosonde data from an experimental program conducted by the Geophysics Directorate in December 1990 and found strong evidence of solar induced artifacts in the daytime thermal probe data. In addition, this thesis performed a theoretical analysis of the thermal response versus altitude of fine wire probes being used in a new thermosonde system under development at the Naval Postgraduate School. Experimental wind tunnel measurements were conducted to validate the analytical predictions.
Virtual Reality for Research in Social Neuroscience
Parsons, Thomas D.; Gaggioli, Andrea; Riva, Giuseppe
2017-01-01
The emergence of social neuroscience has significantly advanced our understanding of the relationship that exists between social processes and their neurobiological underpinnings. Social neuroscience research often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and social interactions. Whilst this research has merit, there is a growing interest in the presentation of dynamic stimuli in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Herein, we discuss the potential of virtual reality for enhancing ecological validity while maintaining experimental control in social neuroscience research. Virtual reality is a technology that allows for the creation of fully interactive, three-dimensional computerized models of social situations that can be fully controlled by the experimenter. Furthermore, the introduction of interactive virtual characters—either driven by a human or by a computer—allows the researcher to test, in a systematic and independent manner, the effects of various social cues. We first introduce key technical features and concepts related to virtual reality. Next, we discuss the potential of this technology for enhancing social neuroscience protocols, drawing on illustrative experiments from the literature. PMID:28420150
Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach.
Friedländer, Felix; Bohmann, Ferdinand; Brunkhorst, Max; Chae, Ju-Hee; Devraj, Kavi; Köhler, Yvette; Kraft, Peter; Kuhn, Hannah; Lucaciu, Alexandra; Luger, Sebastian; Pfeilschifter, Waltraud; Sadler, Rebecca; Liesz, Arthur; Scholtyschik, Karolina; Stolz, Leonie; Vutukuri, Rajkumar; Brunkhorst, Robert
2017-08-01
Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One reason is presumed to be the low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of the ischemic damage as commonly measured by free-hand planimetry of TTC-stained brain sections, is subject to an unrecognized low inter-rater and test-retest reliability, with strong implications for statistical power and bias. As an alternative approach, we suggest a simple, open-source, software-assisted method that takes advantage of automatic-thresholding techniques. We demonstrate the validity of the automated method for tMCAO infarct volumetry and the improvement in reliability it provides. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in the year 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry might help to improve reproducibility and therefore the robustness of bench-to-bedside translation.
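The automatic-thresholding idea can be illustrated with a self-contained Python sketch using Otsu's method. This is our toy, with synthetic 8-bit "sections" and invented pixel-area and slice-thickness scales; the published pipeline is software-assisted but not necessarily this code:

```python
# Illustrative software-assisted volumetry: an automatic (Otsu) threshold
# separates pale, TTC-unstained infarct pixels from darker healthy tissue,
# and per-section areas are summed into a volume. All numbers are made up.
def otsu_threshold(pixels):
    """Return the grey value maximizing between-class variance (0-255 data)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += t * hist[t]
        w_fg = total - w_bg
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def infarct_volume(sections, px_area_mm2, thickness_mm):
    """Sum supra-threshold (pale) pixel areas across serial sections."""
    volume = 0.0
    for sec in sections:
        t = otsu_threshold(sec)
        infarct_px = sum(1 for p in sec if p > t)   # bright = unstained
        volume += infarct_px * px_area_mm2 * thickness_mm
    return volume

# Two synthetic sections: dark healthy tissue (~60) vs pale infarct (~200).
sec = [60] * 900 + [200] * 100            # 10% of pixels infarcted
vol = infarct_volume([sec, sec], px_area_mm2=0.01, thickness_mm=2.0)
print(round(vol, 2))   # 2 sections x 100 px x 0.01 mm^2 x 2 mm
```

Because the threshold is computed from the histogram rather than drawn by hand, repeated runs on the same sections give identical results, which is the reliability argument in a nutshell.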
Webb, R. Chad; Ma, Yinji; Krishnan, Siddharth; Li, Yuhang; Yoon, Stephen; Guo, Xiaogang; Feng, Xue; Shi, Yan; Seidel, Miles; Cho, Nam Heon; Kurniawan, Jonas; Ahad, James; Sheth, Niral; Kim, Joseph; Taylor VI, James G.; Darlington, Tom; Chang, Ken; Huang, Weizhong; Ayers, Joshua; Gruebele, Alexander; Pielak, Rafal M.; Slepian, Marvin J.; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.
2015-01-01
Continuous monitoring of variations in blood flow is vital in assessing the status of microvascular and macrovascular beds for a wide range of clinical and research scenarios. Although a variety of techniques exist, most require complete immobilization of the subject, thereby limiting their utility to hospital or clinical settings. Those that can be rendered in wearable formats suffer from limited accuracy, motion artifacts, and other shortcomings that follow from an inability to achieve intimate, noninvasive mechanical linkage of sensors with the surface of the skin. We introduce an ultrathin, soft, skin-conforming sensor technology that offers advanced capabilities in continuous and precise blood flow mapping. Systematic work establishes a set of experimental procedures and theoretical models for quantitative measurements and guidelines in design and operation. Experimental studies on human subjects, including validation with measurements performed using state-of-the-art clinical techniques, demonstrate sensitive and accurate assessment of both macrovascular and microvascular flow under a range of physiological conditions. Refined operational modes eliminate long-term drifts and reduce power consumption, thereby providing steps toward the use of this technology for continuous monitoring during daily activities. PMID:26601309
"Bed Side" Human Milk Analysis in the Neonatal Intensive Care Unit: A Systematic Review.
Fusch, Gerhard; Kwan, Celia; Kotrri, Gynter; Fusch, Christoph
2017-03-01
Human milk analyzers can measure the macronutrient content of native breast milk to tailor adequate supplementation with fortifiers. This article reviews all studies using milk analyzers, including (i) evaluation of the devices, (ii) the impact of different conditions on the macronutrient analysis of human milk, and (iii) clinical trials to improve growth. Results lack consistency, potentially due to systematic errors in the validation of the devices or pre-analytical sample preparation errors such as homogenization. It is crucial to introduce good laboratory and clinical practice when using these devices; otherwise, non-validated clinical usage can severely affect the growth outcomes of infants. Copyright © 2016 Elsevier Inc. All rights reserved.
Cushion, Christopher; Harvey, Stephen; Muir, Bob; Nelson, Lee
2012-01-01
We outline the evolution of a computerised systematic observation tool and describe the process for establishing the validity and reliability of this new instrument. The Coach Analysis and Interventions System (CAIS) has 23 primary behaviours related to physical behaviour, feedback/reinforcement, instruction, verbal/non-verbal, questioning and management. The instrument also analyses secondary coach behaviour related to performance states, recipient, timing, content and questioning/silence. The CAIS is a multi-dimensional and multi-level mechanism able to provide detailed and contextualised data about specific coaching behaviours occurring in complex and nuanced coaching interventions and environments that can be applied to both practice sessions and competition.
ERIC Educational Resources Information Center
Rogers, Richard; Gillard, Nathan D.; Wooley, Chelsea N.; Kelsey, Katherine R.
2013-01-01
A major strength of the Personality Assessment Inventory (PAI) is its systematic assessment of response styles, including feigned mental disorders. Recently, Mogge, Lepage, Bell, and Ragatz developed and provided the initial validation for the Negative Distortion Scale (NDS). Using rare symptoms as its detection strategy for feigning, the…
The Use of Research Results in Teaching Social Work Practice.
ERIC Educational Resources Information Center
Lawrence, Richard G.
Because the success of intervention depends upon the validity of the propositions employed, and because scientific research assures validity by providing the most systematic and rigorous attention to problems, the utilization of research is important to social work practice. Several factors limit its use--(1) although concepts are clearly defined…
A Multilevel Bifactor Approach to Construct Validation of Mixed-Format Scales
ERIC Educational Resources Information Center
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony
2018-01-01
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
ERIC Educational Resources Information Center
Pepper, David; Hodgen, Jeremy; Lamesoo, Katri; Kõiv, Pille; Tolboom, Jos
2018-01-01
Cognitive interviewing (CI) provides a method of systematically collecting validity evidence of response processes for questionnaire items. CI involves a range of techniques for prompting individuals to verbalise their responses to items. One such technique is concurrent verbalisation, as developed in Think Aloud Protocol (TAP). This article…
An Evaluation of the Technical Adequacy of a Revised Measure of Quality Indicators of Transition
ERIC Educational Resources Information Center
Morningstar, Mary E.; Lee, Hyunjoo; Lattin, Dana L.; Murray, Angela K.
2016-01-01
This study confirmed the reliability and validity of the Quality Indicators of Exemplary Transition Programs Needs Assessment-2 (QI-2). Quality transition program indicators were identified through a systematic synthesis of transition research, policies, and program evaluation measures. To verify reliability and validity of the QI-2, we…
ERIC Educational Resources Information Center
Ghadi, Ibrahim; Alwi, Nor Hayati; Bakar, Kamariah Abu; Talib, Othman
2012-01-01
This research aims to evaluate the psychometric properties and construct validity of the Critical Thinking Disposition (CTD) instrument. The CTD instrument consists of 39 Likert-type items measuring seven dispositions, namely analyticity, open-mindedness, truth-seeking, systematicity, self-confidence, inquisitiveness and maturity. The study involves…
Validity of a Scale to Measure Teachers' Attitudes towards Sex Education
ERIC Educational Resources Information Center
de Almeida Reis, Maria Helena; Vilar, Duarte Goncalo Rei
2006-01-01
Despite the current legislation requiring sex education as part of the school curriculum in Portugal, great obstacles to its implementation remain. Furthermore, sex education is far from being systematically administered. Thus, the main interest in our project was to validate a scale that measures teachers' attitudes towards sex education. There…
A Model of Substance Abuse Risk: Adapting to the Sri Lankan Context
ERIC Educational Resources Information Center
Ismail, Anne Chandrika; Seneviratne, Rohini De Alwis; Newcombe, Peter A.; Wanigaratne, Shamil
2009-01-01
This study translated and validated the Substance Use Risk Profile Scale (SURPS) among 13 to 18 year old Sri Lankan adolescents attending school. A standard systematic translation procedure was followed to translate the original SURPS into Sinhala language. A Delphi process was conducted to determine judgmental validity of Sinhala SURPS.…