Sample records for improving calculation interpretation

  1. ClinGen Pathogenicity Calculator: a configurable system for assessing pathogenicity of genetic variants.

    PubMed

    Patel, Ronak Y; Shah, Neethu; Jackson, Andrew R; Ghosh, Rajarshi; Pawliczek, Piotr; Paithankar, Sameer; Baker, Aaron; Riehle, Kevin; Chen, Hailin; Milosavljevic, Sofia; Bizon, Chris; Rynearson, Shawn; Nelson, Tristan; Jarvik, Gail P; Rehm, Heidi L; Harrison, Steven M; Azzariti, Danielle; Powell, Bradford; Babb, Larry; Plon, Sharon E; Milosavljevic, Aleksandar

    2017-01-12

    The success of the clinical use of sequencing based tests (from single gene to genomes) depends on the accuracy and consistency of variant interpretation. Aiming to improve the interpretation process through practice guidelines, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) have published standards and guidelines for the interpretation of sequence variants. However, manual application of the guidelines is tedious and prone to human error. Web-based tools and software systems may not only address this problem but also document reasoning and supporting evidence, thus enabling transparency of evidence-based reasoning and resolution of discordant interpretations. In this report, we describe the design, implementation, and initial testing of the Clinical Genome Resource (ClinGen) Pathogenicity Calculator, a configurable system and web service for the assessment of pathogenicity of Mendelian germline sequence variants. The system allows users to enter the applicable ACMG/AMP-style evidence tags for a specific allele with links to supporting data for each tag and generate guideline-based pathogenicity assessment for the allele. Through automation and comprehensive documentation of evidence codes, the system facilitates more accurate application of the ACMG/AMP guidelines, improves standardization in variant classification, and facilitates collaborative resolution of discordances. The rules of reasoning are configurable with gene-specific or disease-specific guideline variations (e.g. cardiomyopathy-specific frequency thresholds and functional assays). The software is modular, equipped with robust application program interfaces (APIs), and available under a free open source license and as a cloud-hosted web service, thus facilitating both stand-alone use and integration with existing variant curation and interpretation systems. The Pathogenicity Calculator is accessible at http://calculator.clinicalgenome.org. By enabling evidence-based reasoning about the pathogenicity of genetic variants and by documenting supporting evidence, the Calculator contributes toward the creation of a knowledge commons and more accurate interpretation of sequence variants in research and clinical care.

  2. Analysis of environmental variation in a Great Plains reservoir using principal components analysis and geographic information systems

    USGS Publications Warehouse

    Long, J.M.; Fisher, W.L.

    2006-01-01

    We present a method for spatial interpretation of environmental variation in a reservoir that integrates principal components analysis (PCA) of environmental data with geographic information systems (GIS). To illustrate our method, we used data from a Great Plains reservoir (Skiatook Lake, Oklahoma) with longitudinal variation in physicochemical conditions. We measured 18 physicochemical features, mapped them using GIS, and then calculated and interpreted four principal components. Principal component 1 (PC1) was readily interpreted as longitudinal variation in water chemistry, but the other principal components (PC2-4) were difficult to interpret. Site scores for PC1-4 were calculated in GIS by summing weighted overlays of the 18 measured environmental variables, with the factor loadings from the PCA as the weights. PC1-4 were then ordered into a landscape hierarchy, an emergent property of this technique, which enabled their interpretation. PC1 was interpreted as a reservoir-scale change in water chemistry, PC2 was a microhabitat variable of rip-rap substrate, PC3 identified coves/embayments, and PC4 consisted of shoreline microhabitats related to slope. The use of GIS improved our ability to interpret the more obscure principal components (PC2-4), which made the spatial variability of the reservoir environment more apparent. This method is applicable to a variety of aquatic systems, can be accomplished using commercially available software programs, and allows for improved interpretation of the geographic environmental variability of a system compared to using typical PCA plots. © Copyright by the North American Lake Management Society 2006.
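The weighted-overlay step described in this record is, arithmetically, just a matrix product of standardized variables with the PCA loadings. A minimal sketch in Python (stand-in random data, `numpy` only; not the paper's GIS workflow):

```python
import numpy as np

# Site scores as weighted sums of standardized environmental variables,
# with the PCA loadings (eigenvectors) as the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                 # 30 sites x 5 variables (stand-in)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each variable
eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
loadings = eigvecs[:, np.argsort(eigvals)[::-1]]  # columns = PC1, PC2, ...
scores = Xs @ loadings                       # site score = weighted sum of variables
```

Summing the weighted GIS overlays performs the same `Xs @ loadings` product cell by cell across the mapped surface.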

  3. Interpretation of various radiation backgrounds observed in the gamma-ray spectrometer experiments carried on the Apollo missions and implications for diffuse gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dyer, C. S.; Trombka, J. I.; Metzger, A. E.; Seltzer, S. M.; Bielefeld, M. J.; Evans, L. G.

    1975-01-01

    Since the report of a preliminary analysis of cosmic gamma-ray measurements made during the Apollo 15 mission, an improved calculation of the spallation activation contribution has been made including the effects of short-lived spallation fragments, which can extend the correction to 15 MeV. In addition, a difference between Apollo 15 and 16 data enables an electron bremsstrahlung contribution to be calculated. A high level of activation observed in a crystal returned on Apollo 17 indicates a background contribution from secondary neutrons. These calculations and observations enable an improved extraction of spurious components and suggest important improvements for future detectors.

  4. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
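The review's point that the "magnitude-based" probabilities are tail areas, and hence P values for nonstandard null hypotheses, can be made concrete. A hedged sketch under a normal approximation, with a hypothetical smallest-worthwhile-change threshold `swc` (illustrative numbers, not the spreadsheet's implementation):

```python
from statistics import NormalDist

def mbi_probs(effect, se, swc):
    """Chances the true effect is beneficial / trivial / harmful, given an
    observed effect, its standard error, and a smallest worthwhile change."""
    z = NormalDist()
    p_beneficial = 1.0 - z.cdf((swc - effect) / se)   # tail beyond +swc
    p_harmful = z.cdf((-swc - effect) / se)           # tail beyond -swc
    return p_beneficial, 1.0 - p_beneficial - p_harmful, p_harmful

b, t, h = mbi_probs(effect=1.0, se=0.5, swc=0.2)
```

Each probability is exactly a one-sided P value for the null hypothesis that the true effect sits at the corresponding threshold, which is the review's reinterpretation.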

  5. Educational interventions to improve screening mammography interpretation: a randomized, controlled trial

    PubMed Central

    Geller, BM; Bogart, A; Carney, PA; Sickles, EA; Smith, RA; Monsees, B; Bassett, LW; Buist, DM; Kerlikowske, K; Onega, T; Yankaskas, B; Haneuse, S; Hill, DA; Wallis, M; Miglioretti, DL

    2014-01-01

    Purpose To conduct a randomized controlled trial of educational interventions to improve performance of screening mammography interpretation. Materials and Methods We randomly assigned physicians who interpret mammography to one of three groups: (1) self-paced DVD; (2) live, expert-led educational session; or (3) control. The DVD and live interventions used mammography cases of varying difficulty and associated teaching points. Interpretive performance was compared using a pre-/post-test design. Sensitivity, specificity, and positive predictive value (PPV) were calculated relative to two outcomes: cancer status and consensus of three experts about recall, and each was compared using logistic regression adjusting for pre-test performance. Results 102 radiologists completed all aspects of the trial. After adjustment for pre-intervention performance, the odds of improved sensitivity for correctly identifying a lesion relative to expert recall were 1.34 times higher for DVD participants than controls (95% confidence interval [CI]: 1.00, 1.81; P=0.050). The odds of improved PPV for correctly identifying a lesion relative to both expert recall (odds ratio [OR]=1.94, 95% CI: 1.24, 3.05; P=0.004) and cancer status (OR=1.81, 95% CI: 1.01, 3.23; P=0.045) were significantly improved for DVD participants compared to controls, with no significant change in specificity. For the live-intervention group, specificity was significantly lower than the control group (OR relative to expert recall=0.80; 95% CI: 0.64, 1.00; P=0.048; OR relative to cancer=0.79; 95% CI: 0.65, 0.95; P=0.015). Conclusion In this randomized controlled trial, the DVD educational intervention resulted in a significant improvement in mammography interpretive screening performance on a test-set, which could translate into improved clinical interpretative performance. PMID:24848854

  6. Educational interventions to improve screening mammography interpretation: a randomized controlled trial.

    PubMed

    Geller, Berta M; Bogart, Andy; Carney, Patricia A; Sickles, Edward A; Smith, Robert; Monsees, Barbara; Bassett, Lawrence W; Buist, Diana M; Kerlikowske, Karla; Onega, Tracy; Yankaskas, Bonnie C; Haneuse, Sebastien; Hill, Deirdre; Wallis, Matthew G; Miglioretti, Diana

    2014-06-01

    The objective of our study was to conduct a randomized controlled trial of educational interventions that were created to improve performance of screening mammography interpretation. We randomly assigned physicians who interpret mammography to one of three groups: self-paced DVD, live expert-led educational seminar, or control. The DVD and seminar interventions used mammography cases of varying difficulty and provided associated teaching points. Interpretive performance was compared using a pretest-posttest design. Sensitivity, specificity, and positive predictive value (PPV) were calculated relative to two outcomes: cancer status and consensus of three experts about recall. The performance measures for each group were compared using logistic regression adjusting for pretest performance. One hundred two radiologists completed all aspects of the trial. After adjustment for preintervention performance, the odds of improved sensitivity for correctly identifying a lesion relative to expert recall were 1.34 times higher for DVD participants than for control subjects (95% CI, 1.00-1.81; p = 0.050). The odds of an improved PPV for correctly identifying a lesion relative to both expert recall (odds ratio [OR] = 1.94; 95% CI, 1.24-3.05; p = 0.004) and cancer status (OR = 1.81; 95% CI, 1.01-3.23; p = 0.045) were significantly improved for DVD participants compared with control subjects, with no significant change in specificity. For the seminar group, specificity was significantly lower than the control group (OR relative to expert recall = 0.80; 95% CI, 0.64-1.00; p = 0.048; OR relative to cancer status = 0.79; 95% CI, 0.65-0.95; p = 0.015). In this randomized controlled trial, the DVD educational intervention resulted in a significant improvement in screening mammography interpretive performance on a test set, which could translate into improved interpretative performance in clinical practice.

  7. Understanding decimal numbers: a foundation for correct calculations.

    PubMed

    Pierce, Robyn U; Steinle, Vicki A; Stacey, Kaye C; Widjaja, Wanty

    2008-01-01

    This paper reports on the effectiveness of an intervention designed to improve nursing students' conceptual understanding of decimal numbers. Results of recent intervention studies have indicated some success at improving nursing students' numeracy through practice in applying procedural rules for calculation and working in real or simulated practical contexts. However, in this study we identified a fundamental problem: a significant minority of students had an inadequate understanding of decimal numbers. The intervention aimed to improve nursing students' basic understanding of the size of decimal numbers so that, firstly, calculation rules are more meaningful, and secondly, students can interpret decimal numbers (whether digital output or results of calculations) sensibly. A well-researched, time-efficient diagnostic instrument was used to identify individuals with an inadequate understanding of decimal numbers. We describe a remedial intervention that resulted in significant improvement on a delayed post-intervention test. We conclude that nurse educators should consider diagnosing and, as necessary, planning for remediation of students' foundational understanding of decimal numbers before teaching procedural rules.
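The diagnostic idea rests on comparison items where common misconceptions produce the wrong answer. An illustrative sketch (not the study's actual instrument) contrasting correct comparison with a "longer-is-larger" response pattern:

```python
from decimal import Decimal

def correct_choice(a: str, b: str) -> str:
    """The numerically larger decimal."""
    return a if Decimal(a) > Decimal(b) else b

def longer_is_larger_choice(a: str, b: str) -> str:
    """The misconception: more digits means a bigger number."""
    return a if len(a) > len(b) else b

pair = ("0.45", "0.5")   # the misconception picks 0.45 because "45 > 5"
```

Items like this separate students who understand place value from those applying a whole-number rule to the digits after the point.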

  8. Improving Quality in Teaching Statistics Concepts Using Modern Visualization: The Design and Use of the Flash Application on Pocket PCs

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Pei-Yu

    2009-01-01

    The emergence of technology has led to numerous changes in mathematical and statistical teaching and learning which has improved the quality of instruction and teacher/student interactions. The teaching of statistics, for example, has shifted from mathematical calculations to higher level cognitive abilities such as reasoning, interpretation, and…

  9. Chord length distributions interpretation using a polydispersed population: Modeling and experiments

    NASA Astrophysics Data System (ADS)

    Cameirao, A.; Le Ba, H.; Darbouret, M.; Herri, J.-M.; Peytavy, J.-L.; Glénat, P.

    2012-03-01

    Chord length distributions were measured during the crystallization of gas hydrates in a flow loop. The conditions on the flow loop were similar with the conditions in the marine pipelines. The flow loop was filled with water in oil emulsion and pressurized with methane (7 MPa) at low temperature (277 K). During crystallization water droplets crystallize and agglomerate. The CLD measures were interpreted in a preceding work [Le Ba et al., 2010] [1] by constructing random aggregates with known geometrical proprieties from a monodispersed population of droplets and calculating their CLD. Comparing calculated CLD with CLD from the experiment, the geometrical parameters: number of primary particles and fractal dimension of experimental aggregates are identified. However some differences remained between the experiment and the calculated CLD. In the present work the droplets population was considered polydispersed improving the comparison between the model and the experiment.

  10. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  11. Vertical or horizontal orientation of foot radiographs does not affect image interpretation

    PubMed Central

    Ferran, Nicholas Antonio; Ball, Luke; Maffulli, Nicola

    2012-01-01

    Summary This study determined whether the orientation of dorsoplantar and oblique foot radiographs has an effect on radiograph interpretation. A test set of 50 consecutive foot radiographs was selected (25 with fractures, and 25 normal) and duplicated in the horizontal orientation. The images were randomly arranged, numbered 1 through 100, and analysed by six image interpreters. Vertical and horizontal area under the ROC curve, accuracy, sensitivity and specificity were calculated for each image interpreter. There was no significant difference in the area under the ROC curve, accuracy, sensitivity or specificity of image interpretation between images viewed in the vertical or horizontal orientation. While conventions for the display of radiographs may help to improve the development of an efficient visual search strategy in trainees, and allow for standardisation of the publication of radiographic images, variation from the convention in clinical practice does not appear to affect the sensitivity or specificity of image interpretation. PMID:23738310
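The per-reader metrics used in studies like this are straightforward to compute. A self-contained sketch with hypothetical labels and scores (the AUC here is the tie-aware rank probability, one common definition equivalent to the trapezoidal ROC area):

```python
def sens_spec(truth, calls):
    """Sensitivity and specificity from binary truth and reader calls."""
    tp = sum(1 for t, c in zip(truth, calls) if t and c)
    fn = sum(1 for t, c in zip(truth, calls) if t and not c)
    tn = sum(1 for t, c in zip(truth, calls) if not t and not c)
    fp = sum(1 for t, c in zip(truth, calls) if not t and c)
    return tp / (tp + fn), tn / (tn + fp)

def auc(truth, scores):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for t, s in zip(truth, scores) if t]
    neg = [s for t, s in zip(truth, scores) if not t]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

truth = [1, 1, 1, 0, 0, 0]                         # 1 = fracture present
sens, spec = sens_spec(truth, [1, 1, 0, 0, 0, 1])  # one reader's calls
area = auc(truth, [0.9, 0.8, 0.3, 0.4, 0.2, 0.1])  # one reader's confidence
```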

  12. Averaged kick maps: less noise, more signal…and probably less bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor

    2009-09-01

    Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σ_A) weighted. Analysis shows that they are comparable and correspond better to the final model than σ_A and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
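The core of the AK approach, averaging maps computed from randomly kicked coordinates, can be illustrated schematically. This toy sketch uses a smooth stand-in for the density-map calculation (not crystallographic code) to show how independent random kicks average out while model-supported features persist:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_map(coords):
    # stand-in for a density-map calculation: a smooth function of coordinates
    return np.sin(coords).sum(axis=0)

coords = np.tile(np.linspace(0.0, np.pi, 50), (3, 1))   # 3 "atoms" x 50 points
kick_maps = [toy_map(coords + rng.normal(scale=0.05, size=coords.shape))
             for _ in range(200)]
ak_map = np.mean(kick_maps, axis=0)   # kicks average out; signal remains
```

The averaged map sits much closer to the unperturbed map than any single kick map does, which is the noise-reduction behaviour the abstract describes.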

  13. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust in atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
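The interpolation rule itself, filling a missing electrode with the mean activation time of its two linear neighbours along the propagation direction, can be sketched simply. Here the wavefront direction is assumed known (left-to-right), a simplification of the paper's data-driven orientation estimate:

```python
import numpy as np

# Activation-time (AT) grid for a planar wave moving left-to-right,
# with one failed electrode (NaN).
at = np.array([[0.0, 1.0, 2.0, 3.0],
               [0.0, 1.0, 2.0, 3.0],
               [0.0, 1.0, np.nan, 3.0],
               [0.0, 1.0, 2.0, 3.0]])

def fill_missing(at, row, col, direction=(0, 1)):
    """Mean AT of the pair of linearly adjacent electrodes along `direction`
    (assumed propagation direction; the paper derives it from the data)."""
    dr, dc = direction
    return 0.5 * (at[row - dr, col - dc] + at[row + dr, col + dc])

interpolated = fill_missing(at, 2, 2)   # -> 2.0, consistent with the wave
```

Because the averaging pair lies along the propagation direction, no electrode-to-electrode distance calculation is needed, matching the design goal stated in the abstract.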

  14. The availability of prior ECGs improves paramedic accuracy in recognizing ST-segment elevation myocardial infarction.

    PubMed

    O'Donnell, Daniel; Mancera, Mike; Savory, Eric; Christopher, Shawn; Schaffer, Jason; Roumpf, Steve

    2015-01-01

    Early and accurate identification of ST-elevation myocardial infarction (STEMI) by prehospital providers has been shown to significantly improve door-to-balloon times and patient outcomes. Previous studies have shown that paramedic accuracy in reading 12-lead ECGs can range from 86% to 94%. However, recent studies have demonstrated that accuracy diminishes for the more uncommon STEMI presentations (e.g. lateral). Unlike hospital physicians, paramedics rarely have the ability to review previous ECGs for comparison. Whether or not a prior ECG can improve paramedic accuracy is not known. We hypothesized that the availability of prior ECGs improves paramedic accuracy in ECG interpretation. 130 paramedics were given a single clinical scenario. They were then randomly assigned 12 computerized prehospital ECGs, 6 with and 6 without an accompanying prior ECG. All ECGs were obtained from a local STEMI registry. For each ECG, paramedics were asked to determine whether or not there was a STEMI and to rate their confidence in their interpretation. To determine whether the prior ECGs improved accuracy, we used a mixed-effects logistic regression model to calculate p-values between the control and intervention. The addition of a previous ECG improved the accuracy of identifying STEMIs from 75.5% to 80.5% (p=0.015). A previous ECG also increased paramedic confidence in their interpretation (p=0.011). The availability of previous ECGs improves paramedic accuracy and enhances their confidence in interpreting STEMIs. Further studies are needed to evaluate this impact in a clinical setting. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. On the method of least squares. II. [for calculation of covariance matrices and optimization algorithms

    NASA Technical Reports Server (NTRS)

    Jefferys, W. H.

    1981-01-01

    A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.
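The two quantities in this record, a Marquardt-damped solution of the normal equations and the parameter covariance matrix, can be sketched for the linear case (synthetic data; the damping parameter `lam` is illustrative, and the covariance is the standard σ²(AᵀA)⁻¹ estimate):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3))                # design matrix
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + rng.normal(scale=0.1, size=50)

# Normal equations with Marquardt damping (lam -> 0 recovers Gauss-Newton).
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)

resid = b - A @ x_hat
sigma2 = resid @ resid / (50 - 3)           # unbiased noise-variance estimate
cov = sigma2 * np.linalg.inv(A.T @ A)       # parameter covariance matrix
```

The damping term keeps the system well conditioned when AᵀA is nearly singular, which is the regime where undamped Newton-type iterations diverge or converge slowly.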

  16. Down to Earth: Contemplative Thinking Exercises for Geography Education

    ERIC Educational Resources Information Center

    de Busser, Cathelijne

    2014-01-01

    Contemporary geography education is mostly based on rational linear thinking skills, such as observation, explanation, interpretation, calculation and analysis. Even field trips--according to many the "heart" of geography--are often organized in a logical, rational manner, in which learners step-by-step improve their understanding of the…

  17. Improving Learning Object Quality: Moodle HEODAR Implementation

    ERIC Educational Resources Information Center

    Munoz, Carlos; Garcia-Penalvo, Francisco J.; Morales, Erla Mariela; Conde, Miguel Angel; Seoane, Antonio M.

    2012-01-01

    Automation toward efficiency is the aim of most intelligent systems in an educational context, in which automating the calculation of results allows experts to spend most of their time on important tasks, not on retrieving, ordering, and interpreting information. In this paper, the authors provide a tool that easily evaluates Learning Objects quality…

  18. Student nurses need more than maths to improve their drug calculating skills.

    PubMed

    Wright, Kerri

    2007-05-01

    Nurses need to be able to perform accurate drug calculations in order to safely administer drugs to their patients (NMC, 2002). Studies have shown, however, that nurses do not always have the necessary skills to calculate accurate drug dosages and are potentially administering incorrect dosages of drugs to their patients (Hutton, M. 1998. Nursing mathematics: the importance of application. Nursing Standard 13(11), 35-38; Kapborg, I. 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4), 389-395; O'Shea, E. 1999. Factors contributing to medication errors: a literature review. Journal of Advanced Nursing 8, 496-504; Wilson, A. 2003. Nurses' maths: researching a practical approach. Nursing Standard 17(47), 33-36). The literature indicates that in order to improve drug calculations, strategies need to focus on both the mathematical skills and the conceptual skills of student nurses, so that they can interpret clinical data into drug calculations to be solved. A study was undertaken to investigate the effectiveness of implementing several strategies which focussed on developing the mathematical and conceptual skills of student nurses to improve their drug calculation skills. The study found that implementing a range of strategies which addressed these two developmental areas significantly improved the drug calculation skills of nurses. The study also indicates that a range of strategies has the potential to ensure that the skills taught are retained by the student nurses. Although the strategies significantly improved the drug calculation skills of student nurses, the fact that only 2 students were able to achieve 100% in their drug calculation test indicates a need for further research into this area.
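The conceptual step such studies target is translating a prescription into the standard volume formula, volume to give = (dose required / stock strength) × stock volume. A minimal worked example with hypothetical values:

```python
from fractions import Fraction

def volume_to_give(dose_mg, stock_mg, stock_ml):
    """volume to give = (dose required / stock strength) * stock volume."""
    return float(Fraction(dose_mg, stock_mg) * stock_ml)

# e.g. 125 mg prescribed; stock on hand is 250 mg in 5 mL
vol_ml = volume_to_give(125, 250, 5)   # -> 2.5 mL
```

The arithmetic is trivial; the diagnostic difficulty the study identifies is setting the calculation up correctly from the clinical data in the first place.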

  19. The calculation of theoretical chromospheric models and the interpretation of the solar spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1994-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.

  20. The calculation of theoretical chromospheric models and the interpretation of solar spectra from rockets and spacecraft

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1993-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.

  1. Measurement and modeling of CO2 mass transfer in brine at reservoir conditions

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Wen, B.; Hesse, M. A.; Tsotsis, T. T.; Jessen, K.

    2018-03-01

    In this work, we combine measurements and modeling to investigate the application of pressure-decay experiments towards delineation and interpretation of CO2 solubility, uptake and mass transfer in water/brine systems at elevated pressures of relevance to CO2 storage operations in saline aquifers. Accurate measurements and modeling of mass transfer in this context are crucial to an improved understanding of the longer-term fate of CO2 that is injected into the subsurface for storage purposes. Pressure-decay experiments are presented for CO2/water and CO2/brine systems with and without the presence of unconsolidated porous media. We demonstrate, via high-resolution numerical calculations in 2-D, that natural convection will complicate the interpretation of the experimental observations if the particle size is not sufficiently small. In such settings, we demonstrate that simple 1-D interpretations can result in an overestimation of the uptake (diffusivity) by two orders of magnitude. Furthermore, we demonstrate that high-resolution numerical calculations agree well with the experimental observations for settings where natural convection contributes substantially to the overall mass transfer process.
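The 1-D interpretation the authors caution against can be sketched: for early-time diffusion into a semi-infinite liquid column, cumulative uptake per unit area is m(t) = 2·c_s·√(Dt/π), so an apparent diffusivity can be inverted from a measured uptake. If convection enhances uptake tenfold, the inverted D is wrong by a factor of 100, the order of overestimation reported. Numbers below are illustrative, not the paper's data:

```python
import math

def apparent_diffusivity(uptake_per_area, c_s, t):
    """Invert m(t) = 2*c_s*sqrt(D*t/pi) for D (1-D semi-infinite diffusion)."""
    return math.pi * (uptake_per_area / (2.0 * c_s)) ** 2 / t

D_true = 2.0e-9                    # m^2/s, typical magnitude for CO2 in water
c_s, t = 1.3e3, 3600.0             # interface solubility (mol/m^3), 1 hour
m_diffusive = 2.0 * c_s * math.sqrt(D_true * t / math.pi)
m_convective = 10.0 * m_diffusive  # illustrative convection-enhanced uptake
D_apparent = apparent_diffusivity(m_convective, c_s, t)
```

Because D enters the uptake through a square root, any multiplicative enhancement of the measured uptake is squared in the inverted diffusivity.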

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamedov, Bahtiyar A.; Somuncu, Elif; Askerov, Iskender M.

    In this work, a new theoretical approach is proposed for calculating the fourth virial coefficient with the Lennard-Jones potential. The established algorithm can be used to evaluate the thermodynamic properties and the intermolecular interaction potentials of liquids and gases with improved accuracy. Note that the evaluation of the high-order virial coefficients is very valuable for accurate calculation of thermodynamic parameters. By using the suggested method, the fourth virial coefficients of CH4, Ar, C2H6 and SF6 molecules are evaluated. The calculation results are useful for accurate interpretation of the experimental data and for the determination of related physical properties.
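For orientation, the lowest-order case is easy to compute: the second virial coefficient of a Lennard-Jones fluid in reduced units (σ = ε = 1) is a one-dimensional integral over the Mayer f-function, evaluated here by simple quadrature. The paper's fourth coefficient requires far harder multidimensional cluster integrals; this sketch only illustrates the virial-coefficient idea:

```python
import math

def u_lj(r):
    """Lennard-Jones potential in reduced units (sigma = epsilon = 1)."""
    return 4.0 * (r ** -12 - r ** -6)

def b2_star(T, r_max=20.0, n=20000):
    """B2*(T*) = -2*pi * integral_0^inf (exp(-u(r)/T*) - 1) r^2 dr."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        f = math.exp(-u_lj(r) / T) - 1.0      # Mayer f-function
        w = 0.5 if i == n else 1.0
        total += w * f * r * r * dr
    return -2.0 * math.pi * total
```

Below the Boyle temperature (T* ≈ 3.4 for Lennard-Jones) attraction dominates and B2 is negative; well above it, repulsion dominates and B2 turns positive.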

  3. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

This report discusses some analytical procedures to enhance the real-time solution of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12-Foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner, necessitating further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems, then discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.
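The residual correction schemes discussed here share the generic form x ← x + M⁻¹(b − Ax), where M is an easily inverted approximation to A; geometrically, each sweep removes the component of the residual that M⁻¹ can resolve. A small sketch (the Jacobi preconditioner and the test system are illustrative, not taken from the report):

```python
import numpy as np

def residual_correction(a_mat, b, m_inv, tol=1e-12, max_iter=1000):
    """Generic residual-correction iteration: x <- x + M^{-1} (b - A x)."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - a_mat @ x              # current residual
        if np.linalg.norm(r) < tol:
            break
        x = x + m_inv @ r              # correct along the resolvable part
    return x

# Jacobi preconditioner M = diag(A) on a small diagonally dominant system
a_mat = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
m_inv = np.diag(1.0 / np.diag(a_mat))
x = residual_correction(a_mat, b, m_inv)
```

The choice of M trades per-sweep cost against convergence rate, which is the kind of solution-time improvement the report explores.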

  5. Modeling thermionic emission from laser-heated nanoparticles

    DOE PAGES

    Mitrani, J. M.; Shneider, M. N.; Stratton, B. C.; ...

    2016-02-01

An adjusted form of thermionic emission is applied to calculate the emitted current from laser-heated nanoparticles and to interpret time-resolved laser-induced incandescence (TR-LII) signals. This adjusted form predicts significantly lower emitted currents than the commonly used Richardson-Dushman equation, since the buildup of positive charge in a laser-heated nanoparticle increases the energy barrier for further emission of electrons. Thermionic emission enters the particle's energy balance equation and can therefore influence TR-LII signals. Additionally, reports suggest that thermionic emission can induce disintegration of nanoparticle aggregates when the electrostatic Coulomb repulsion energy between two positively charged primary particles is greater than the van der Waals bond energy. Furthermore, since the presence and size of aggregates strongly influence the particle's energy balance equation, using an appropriate form of thermionic emission to calculate the emitted current may improve interpretation of TR-LII signals.
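The charge-feedback idea can be sketched numerically: each emitted electron leaves positive charge behind, raising the effective work function by the Coulomb energy e²N/(4πε₀a) and throttling further emission relative to the bare Richardson-Dushman rate. A minimal sketch (the particle radius, temperature, and work function below are illustrative assumptions, not the paper's values):

```python
import math

K_B  = 1.380649e-23       # Boltzmann constant, J/K
E_CH = 1.602176634e-19    # elementary charge, C
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
A_RD = 1.20173e6          # Richardson constant, A m^-2 K^-2

def rd_flux(temp, work_fn):
    """Richardson-Dushman electron flux, electrons m^-2 s^-1."""
    return A_RD * temp ** 2 * math.exp(-work_fn / (K_B * temp)) / E_CH

def emitted_electrons(temp, work_fn, radius, dt, steps, charge_feedback):
    """Electrons emitted from a sphere over steps*dt; with feedback on,
    N accumulated charges raise the barrier by e^2 N / (4 pi eps0 a)."""
    area = 4.0 * math.pi * radius ** 2
    n = 0.0
    for _ in range(steps):
        w_eff = work_fn
        if charge_feedback:
            w_eff += n * E_CH ** 2 / (4.0 * math.pi * EPS0 * radius)
        n += rd_flux(temp, w_eff) * area * dt
    return n

# illustrative values: 15 nm soot-like particle near peak LII temperature
RADIUS, TEMP, W = 15e-9, 3500.0, 4.6 * E_CH
n_corrected = emitted_electrons(TEMP, W, RADIUS, 1e-9, 100, True)
n_uncorrected = emitted_electrons(TEMP, W, RADIUS, 1e-9, 100, False)
```

For a nanometer-scale particle the per-electron Coulomb increment is a sizable fraction of kT, so emission self-limits after only tens of electrons, far below the uncorrected Richardson-Dushman count.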

  6. Effect of Picture Archiving and Communication System Image Manipulation on the Agreement of Chest Radiograph Interpretation in the Neonatal Intensive Care Unit.

    PubMed

    Castro, Denise A; Naqvi, Asad Ahmed; Vandenkerkhof, Elizabeth; Flavin, Michael P; Manson, David; Soboleski, Donald

    2016-01-01

Variability in image interpretation has been attributed to differences in the interpreters' knowledge base, experience level, and access to the clinical scenario. Picture archiving and communication system (PACS) has allowed the user to manipulate the images while developing their impression of the radiograph. The aim of this study was to determine the agreement of chest radiograph (CXR) impressions among radiologists and neonatologists and to determine the effect of image manipulation with PACS on report impression. This prospective cohort study included 60 patients from the Neonatal Intensive Care Unit undergoing CXRs. Three radiologists and three neonatologists reviewed two consecutive frontal CXRs of each patient. Each physician was allowed to manipulate the images as needed to provide a decision of "improved," "unchanged," or "disease progression" lung disease for each patient. Each physician then repeated the process; this time, they were not allowed to individually manipulate the images, but an independent radiologist preset the image brightness and contrast to best optimize the CXR appearance. Percent agreement and opposing reporting views were calculated between all six physicians for each of the two methods (allowing and not allowing image manipulation). One hundred percent agreement in image impression between all six observers was seen in only 5% of cases when allowing image manipulation; 100% agreement was seen in 13% of the cases when there was no manipulation of the images. Agreement in CXR interpretation is poor; the ability to manipulate the images on PACS results in a decrease in agreement in the interpretation of these studies. New methods to standardize image appearance and allow improved comparison with previous studies should be sought to improve clinician agreement in interpretation consistency and advance patient care.
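Percent agreement of the kind reported here can be computed as the mean case-level agreement over all rater pairs. A hypothetical helper (the study's exact tabulation, which also counted strict all-six agreement, may differ):

```python
from itertools import combinations

def pairwise_percent_agreement(ratings):
    """Mean proportion of case-level agreement over all rater pairs.

    `ratings` maps each rater to a list of categorical calls, one per case.
    """
    raters = list(ratings)
    n_cases = len(ratings[raters[0]])
    agree = total = 0
    for r1, r2 in combinations(raters, 2):
        for i in range(n_cases):
            agree += ratings[r1][i] == ratings[r2][i]
            total += 1
    return agree / total

# three hypothetical readers over three cases
ratings = {"rad1": ["improved", "unchanged", "progression"],
           "rad2": ["improved", "progression", "progression"],
           "neo1": ["improved", "unchanged", "unchanged"]}
agreement = pairwise_percent_agreement(ratings)
```

Chance-corrected statistics such as Fleiss' kappa are often preferred over raw percent agreement for multi-reader designs like this one.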

  7. Optical response of the sodium alanate system: GW0-BSE calculations and thin film measurements

    NASA Astrophysics Data System (ADS)

    van Setten, M. J.; Gremaud, R.; Brocks, G.; Dam, B.; Kresse, G.; de Wijs, G. A.

    2011-01-01

    We calculate from first principles the optical spectra of the hydrides in the sodium alanate hydrogen storage system: NaH, NaAlH4, and Na3AlH6. In particular we study the effects of systematic improvements of the theoretical description. To benchmark the calculations we also measure the optical response of a thin film of NaH. The simplest calculated dielectric functions are based upon independent electrons and holes, whose spectrum is obtained at the G0W0 level. Successive improvements consist of including partial self-consistency (so-called GW0) and account for excitonic effects, using the Bethe-Salpeter equation (BSE). Each improvement gives a sizable blue shift or red shift of the dielectric functions, but conserves the trend in the optical gap among different materials. Whereas these shifts partially cancel at the highest (GW0-BSE) level of approximation, the shape of the dielectric functions is strongly modified by excitonic effects. Calculations at the GW0-BSE level give a good agreement with the dielectric function of NaH extracted from the measurements. It demonstrates that the approach can be used for a quantitative interpretation of spectra in novel hydrogen storage materials obtained via, e.g., hydrogenography.

  8. The quest for the perfect gravity anomaly: Part 1 - New calculation standards

    USGS Publications Warehouse

    Li, X.; Hildenbrand, T.G.; Hinze, W. J.; Keller, Gordon R.; Ravat, D.; Webring, M.

    2006-01-01

    The North American gravity database together with databases from Canada, Mexico, and the United States are being revised to improve their coverage, versatility, and accuracy. An important part of this effort is revision of procedures and standards for calculating gravity anomalies taking into account our enhanced computational power, modern satellite-based positioning technology, improved terrain databases, and increased interest in more accurately defining different anomaly components. The most striking revision is the use of one single internationally accepted reference ellipsoid for the horizontal and vertical datums of gravity stations as well as for the computation of the theoretical gravity. The new standards hardly impact the interpretation of local anomalies, but do improve regional anomalies. Most importantly, such new standards can be consistently applied to gravity database compilations of nations, continents, and even the entire world. ?? 2005 Society of Exploration Geophysicists.
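The "single internationally accepted reference ellipsoid" part of the revision amounts to computing theoretical (normal) gravity from one closed-form expression tied to that ellipsoid. A sketch using Somigliana's formula with GRS80 constants (the abstract does not name the ellipsoid adopted, so GRS80 here is an assumption):

```python
import math

# GRS80 normal-gravity constants (Somigliana closed form)
GAMMA_E = 9.7803267715      # equatorial normal gravity, m/s^2
K       = 0.001931851353    # normal gravity constant
E2      = 0.00669438002290  # first eccentricity squared

def normal_gravity(lat_deg):
    """Theoretical gravity on the ellipsoid at geodetic latitude, m/s^2."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)
```

The anomaly is then observed gravity minus this theoretical value (plus height and terrain corrections), so using one ellipsoid for both positioning and theoretical gravity keeps compilations across nations mutually consistent.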

  9. Collisional Ionization Equilibrium for Optically Thin Plasmas

    NASA Technical Reports Server (NTRS)

    Bryans, P.; Mitthumsiri, W.; Savin, D. W.; Badnell, N. R.; Gorczyca, T. W.; Laming, J. M.

    2006-01-01

Reliably interpreting spectra from electron-ionized cosmic plasmas requires accurate ionization balance calculations for the plasma in question. However, much of the atomic data needed for these calculations have not been generated using modern theoretical methods, and their reliability is often highly suspect. We have utilized state-of-the-art calculations of dielectronic recombination (DR) rate coefficients for the hydrogenic through Na-like ions of all elements from He to Zn. We have also utilized state-of-the-art radiative recombination (RR) rate coefficient calculations for the bare through Na-like ions of all elements from H to Zn. Using our data and the recommended electron impact ionization data of Mazzotta et al. (1998), we have carried out improved collisional ionization equilibrium calculations. We compare our calculated fractional ionic abundances using these data with those presented by Mazzotta et al. (1998) for all elements from H to Ni, and with the fractional abundances derived from the modern DR and RR calculations of Gu (2003a,b, 2004) for Mg, Si, S, Ar, Ca, Fe, and Ni.
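In collisional ionization equilibrium, adjacent charge states satisfy n_{i+1}/n_i = C_i/α_{i+1} (electron-impact ionization balanced by recombination), so the fractional abundances follow from a simple chain product. A sketch with toy rate coefficients (real calculations use the tabulated DR/RR and ionization data discussed above):

```python
def ion_fractions(ionization, recombination):
    """Equilibrium charge-state fractions for an N-state chain.

    ionization[i]    : rate coefficient ionizing state i -> i+1
    recombination[i] : rate coefficient recombining state i+1 -> i
    Balance requires n_{i+1}/n_i = C_i / alpha_{i+1}.
    """
    ratios = [1.0]
    for c, alpha in zip(ionization, recombination):
        ratios.append(ratios[-1] * c / alpha)
    total = sum(ratios)
    return [r / total for r in ratios]

# toy three-state chain with hypothetical rate coefficients
fractions = ion_fractions([1.0, 4.0], [2.0, 1.0])
```

Because each ratio multiplies down the chain, errors in a single DR or RR rate coefficient propagate into every higher charge state, which is why updated recombination data shift the whole balance.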

  10. Enhanced angular overlap model for nonmetallic f -electron systems

    NASA Astrophysics Data System (ADS)

    Gajek, Z.

    2005-07-01

    An efficient method of interpretation of the crystal field effect in nonmetallic f -electron systems, the enhanced angular overlap model (EAOM), is presented. The method is established on the ground of perturbation expansion of the effective Hamiltonian for localized electrons and first-principles calculations related to available experimental data. The series of actinide compounds AO2 , oxychalcogenides AOX , and dichalcogenides UX2 where X=S ,Se,Te and A=U ,Np serve as probes of the effectiveness of the proposed method. An idea is to enhance the usual angular overlap model with ab initio calculations of those contributions to the crystal field potential, which cannot be represented by the usual angular overlap model (AOM). The enhancement leads to an improved fitting and makes the approach intrinsically coherent. In addition, the ab initio calculations of the main, AOM-consistent part of the crystal field potential allows one to fix the material-specific relations for the EAOM parameters in the effective Hamiltonian. Consequently, the electronic structure interpretation based on EAOM can be extended to systems of the lowest point symmetries or/and deficient experimental data. Several examples illustrating the promising capabilities of EAOM are given.

  11. Advanced Concepts Theory Annual Report 1989

    DTIC Science & Technology

    1990-03-29

kinetic energy to x-ray conversion and are being evaluated using nickel array implosion calculations. Maxwell Laboratory aluminum array implosion...general, we need to evaluate the degree of machine PRS decoupling produced by runaway electrons, and the existence of a corona may be a relevant aspect of...the tools necessary to carry out data analysis and interpretation and (4) promote the design and evaluation of new experiments and new improved loads

  12. Exploration geophysics calculator programs for use on Hewlett-Packard models 67 and 97 programmable calculators

    USGS Publications Warehouse

    Campbell, David L.; Watts, Raymond D.

    1978-01-01

Program listing, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over 2D prism with ≤ 9 vertices--Talwani method; (2) magnetic anomaly (ΔT, ΔV, or ΔH) over 2D prism with ≤ 8 vertices--Talwani method; (3) total-field magnetic anomaly profile over thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) ≤ 4 dipping seismic refractors--interpretation; (6) ≤ 4 dipping seismic refractors--design; (7) vertical electrical sounding over ≤ 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electric sounding: Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers--forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with ≤ 10-length filter.

  13. How to report and interpret screening test properties: guidelines for driving researchers.

    PubMed

    Weaver, Bruce; Walter, Stephen D; Bédard, Michel

    2014-01-01

    One important goal of driving research is the development of a short but valid office-based screening test for fitness to drive of aging drivers. Several candidate tests have been proposed already, and no doubt others will be proposed in the future. It might seem obvious that authors advocating for the adoption of a particular screening test or procedure should report sensitivity, specificity, and other common screening test properties. Unfortunately, driving researchers have frequently failed to report any screening test properties. Others have reported screening test properties but have made basic mistakes such as calculating predictive values of positive and negative tests but reporting them incorrectly as sensitivity and specificity. These omissions and errors suggest that some driving researchers may be unaware of the importance of accurately reporting test properties when proposing a screening procedure and that others may need a refresher on how to calculate and interpret the most common screening test properties. Many good learning resources for screening and diagnostic tests are available, but most of them are intended for students and researchers in medicine, epidemiology, or public health. We hope that this tutorial in a prominent transportation journal will help lead to improved reporting and interpretation of screening test properties in articles that assess the usefulness of potential screening tools for fitness to drive.
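The distinction the authors stress, that sensitivity and specificity condition on true status while predictive values condition on the test result, can be made concrete with the standard 2×2 computation:

```python
def screening_properties(tp, fp, fn, tn):
    """Standard 2x2 screening-test properties.

    Sensitivity and specificity condition on true status; the predictive
    values condition on the test result. Reporting PPV/NPV under the
    labels sensitivity/specificity is exactly the mix-up described above.
    """
    return {
        "sensitivity": tp / (tp + fn),   # P(test+ | truly unfit)
        "specificity": tn / (tn + fp),   # P(test- | truly fit)
        "ppv": tp / (tp + fp),           # P(truly unfit | test+)
        "npv": tn / (tn + fn),           # P(truly fit | test-)
    }

# hypothetical 2x2 table: 50 truly unfit drivers, 150 truly fit
props = screening_properties(tp=40, fp=60, fn=10, tn=90)
```

Note that unlike sensitivity and specificity, the predictive values shift with the prevalence of unfit drivers in the sample, so they cannot be transplanted between populations.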

  14. The Band Structure of Polymers: Its Calculation and Interpretation. Part 3. Interpretation.

    ERIC Educational Resources Information Center

    Duke, B. J.; O'Leary, Brian

    1988-01-01

    In this article, the third part of a series, the results of ab initio polymer calculations presented in part 2 are discussed. The electronic structure of polymers, symmetry properties of band structure, and generalizations are presented. (CW)

  15. 21 CFR 868.1900 - Diagnostic pulmonary-function interpretation calculator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Diagnostic pulmonary-function interpretation calculator. 868.1900 Section 868.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... pulmonary-function values. (b) Classification. Class II (performance standards). ...

  16. 21 CFR 868.1900 - Diagnostic pulmonary-function interpretation calculator.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Diagnostic pulmonary-function interpretation calculator. 868.1900 Section 868.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... pulmonary-function values. (b) Classification. Class II (performance standards). ...

  17. Nonprescription medication use and literacy among New Hampshire eighth graders.

    PubMed

    Abel, Cheryl; Johnson, Kerri; Waller, Dustin; Abdalla, Maha; Goldsmith, Carroll-Ann W

    2012-01-01

    To assess whether New Hampshire (NH) eighth graders were self-medicating with over-the-counter (OTC) medications, had literacy skills necessary to safely and accurately interpret OTC medication labels, and showed improvement in OTC medication safe use and literacy skills after student pharmacist-led education. Cross-sectional repeated-measures study. NH, five separate sessions, in 2010 and 2011. 101 NH eighth grade students. Participants answered questions derived from OTC drug facts labels that assessed OTC medication safe use and literacy before and after a student pharmacist-led presentation describing each section of the labels. Participant use of OTC medications, whether participants interpreted OTC drug facts labels correctly, and whether participants were able to identify safe use of OTC medications before and after instruction about OTC drug facts labels. 57% of participants reported taking OTC medications in the previous month, 22% reported taking OTC medications autonomously, and 43% reported checking with a trusted adult before self-administration. After student pharmacist-led education, significant improvements were seen in identifying product indications, calculating adult doses, interpreting adverse effects, knowing when to call a medical provider, understanding proper medication storage, identifying expiration dates, and identifying duplicate medications in products. NH eighth graders were self-medicating with OTC medications. Significant improvements in OTC medication label literacy were seen after student pharmacist-led education. These results provide evidence of the need for, and positive effects of, OTC medication education among U.S. adolescents.

18. Dilute Russell's Viper Venom Time analysis in a Haematology Laboratory: An audit.

    PubMed

    Kruger, W; Meyer, P W A; Nel, J G

    2018-04-17

To determine whether the current set of evaluation criteria used for dilute Russell's Viper Venom Time (dRVVT) investigations in the routine laboratory meet expectation and identify possible shortcomings. All dRVVT assays requested from January 2015 to December 2015 were appraised in this cross-sectional study. The raw data panels were compared with the new reference interval, established in 2016, to determine the sequence of assays that should have been performed. The interpretive comments were audited, and false-negative reports identified. Interpretive comments according to three interpretation guidelines were compared. The reagent cost per assay was determined, and reagent cost wastage, due to redundant tests, was calculated. Only ~9% of dRVVT results authorized during 2015 had an interpretive comment included in the report. ~15% of these results were false-negative interpretations. There is a statistically significant difference in interpretive comments between the three interpretation methods. Redundant mixing tests resulted in R 7477.91 (~11%) reagent cost wastage in 2015. We managed to demonstrate very evident deficiencies in our own practice and established a standardized workflow that will potentially render our service more efficient and cost-effective, aiding clinicians in making improved treatment decisions and diagnoses. Furthermore, it is essential that standard operating procedures be kept up to date and executed by all staff in the laboratory. © 2018 John Wiley & Sons Ltd.

  19. Dynamics of crystalline acetanilide: Analysis using neutron scattering and computer simulation

    NASA Astrophysics Data System (ADS)

    Hayward, R. L.; Middendorf, H. D.; Wanderlingh, U.; Smith, J. C.

    1995-04-01

    The unusual temperature dependence of several optical spectroscopic vibrational bands in crystalline acetanilide has been interpreted as providing evidence for dynamic localization. Here we examine the vibrational dynamics of crystalline acetanilide over a spectral range of ˜20-4000 cm-1 using incoherent neutron scattering experiments, phonon normal mode calculations and molecular dynamics simulations. A molecular mechanics energy function is parametrized and used to perform the normal mode analyses in the full configurational space of the crystal i.e., including the intramolecular and intermolecular degrees of freedom. One- and multiphonon incoherent inelastic neutron scattering intensities are calculated from harmonic analyses in the first Brillouin zone and compared with the experimental data presented here. Phonon dispersion relations and mean-square atomic displacements are derived from the harmonic model and compared with data derived from coherent inelastic neutron scattering and neutron and x-ray diffraction. To examine the temperature effects on the vibrations the full, anharmonic potential function is used in molecular dynamics simulations of the crystal at 80, 140, and 300 K. Several, but not all, of the spectral features calculated from the molecular dynamics simulations exhibit temperature-dependent behavior in agreement with experiment. The significance of the results for the interpretation of the optical spectroscopic results and possible improvements to the model are discussed.

  20. Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH): recommendations to advance research communication

    PubMed Central

    Quintana, D S; Alvares, G A; Heathers, J A J

    2016-01-01

    The number of publications investigating heart rate variability (HRV) in psychiatry and the behavioral sciences has increased markedly in the last decade. In addition to the significant debates surrounding ideal methods to collect and interpret measures of HRV, standardized reporting of methodology in this field is lacking. Commonly cited recommendations were designed well before recent calls to improve research communication and reproducibility across disciplines. In an effort to standardize reporting, we propose the Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH), a checklist with four domains: participant selection, interbeat interval collection, data preparation and HRV calculation. This paper provides an overview of these four domains and why their standardized reporting is necessary to suitably evaluate HRV research in psychiatry and related disciplines. Adherence to these communication guidelines will help expedite the translation of HRV research into a potential psychiatric biomarker by improving interpretation, reproducibility and future meta-analyses. PMID:27163204
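Within the GRAPH checklist's "HRV calculation" domain, two of the most commonly reported time-domain measures are SDNN and RMSSD, computed directly from the interbeat-interval series. A minimal sketch (the sample intervals are hypothetical):

```python
import math

def sdnn(ibi_ms):
    """Sample standard deviation of interbeat (NN) intervals, ms."""
    m = sum(ibi_ms) / len(ibi_ms)
    return math.sqrt(sum((x - m) ** 2 for x in ibi_ms) / (len(ibi_ms) - 1))

def rmssd(ibi_ms):
    """Root mean square of successive NN-interval differences, ms."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# hypothetical interbeat-interval series in milliseconds
ibi = [800.0, 810.0, 790.0, 805.0]
sdnn_val, rmssd_val = sdnn(ibi), rmssd(ibi)
```

Standardized reporting matters precisely because such values are sensitive to recording length, artifact handling, and whether ectopic beats were excluded before the interval series was formed.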

  1. Applied statistics in ecology: common pitfalls and simple solutions

    Treesearch

    E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick

    2013-01-01

    The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...

  2. Evaluation of forensic DNA mixture evidence: protocol for evaluation, interpretation, and statistical calculations using the combined probability of inclusion.

    PubMed

    Bieber, Frederick R; Buckleton, John S; Budowle, Bruce; Butler, John M; Coble, Michael D

    2016-08-31

The evaluation and interpretation of forensic DNA mixture evidence faces greater interpretational challenges due to increasingly complex mixture evidence. Such challenges include: casework involving low quantity or degraded evidence leading to allele and locus dropout; allele sharing of contributors leading to allele stacking; and differentiation of PCR stutter artifacts from true alleles. There is variation in statistical approaches used to evaluate the strength of the evidence when inclusion of a specific known individual(s) is determined, and the approaches used must be supportable. There are concerns that methods utilized for interpretation of complex forensic DNA mixtures may not be implemented properly in some casework. Similar questions are being raised in a number of U.S. jurisdictions, leading to some confusion about mixture interpretation for current and previous casework. Key elements necessary for the interpretation and statistical evaluation of forensic DNA mixtures are described. The most common method for statistical evaluation of DNA mixtures in many parts of the world, including the USA, is the Combined Probability of Inclusion/Exclusion (CPI/CPE); exposition and elucidation of this method and a protocol for its use are the focus of this article. Formulae and other supporting materials are provided. Guidance and details of a DNA mixture interpretation protocol are provided for application of the CPI/CPE method in the analysis of more complex forensic DNA mixtures. This description, in turn, should help reduce the variability of interpretation with application of this methodology and thereby improve the quality of DNA mixture interpretation throughout the forensic community.
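In its basic form, the CPI at each locus is the squared sum of the population frequencies of all alleles observed in the mixture, multiplied across loci; CPE = 1 − CPI. A minimal sketch (real protocols add the stochastic-threshold, dropout, and stutter safeguards described above, and the allele frequencies here are hypothetical):

```python
def combined_probability_of_inclusion(locus_allele_freqs):
    """CPI: probability that a random, unrelated person would be
    included as a possible contributor to the mixture.

    `locus_allele_freqs` maps each locus to the population frequencies
    of the alleles observed in the mixture at that locus.
    """
    cpi = 1.0
    for freqs in locus_allele_freqs.values():
        p = sum(freqs)      # chance a random allele is among those observed
        cpi *= p * p        # both alleles of a genotype must be included
    return cpi

# two-locus toy example with hypothetical allele frequencies
freqs = {"D8S1179": [0.1, 0.2, 0.15], "TH01": [0.3, 0.25]}
cpi = combined_probability_of_inclusion(freqs)
cpe = 1.0 - cpi             # combined probability of exclusion
```

Because every observed allele enters the sum regardless of contributor genotype, CPI is only valid at loci where all contributor alleles are confidently present, which is why dropout-prone loci must be excluded from the calculation.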

  3. Transitioning of power flow in beam models with bends

    NASA Technical Reports Server (NTRS)

    Hambric, Stephen A.

    1990-01-01

    The propagation of power flow through a dynamically loaded beam model with 90 degree bends is investigated using NASTRAN and McPOW. The transitioning of power flow types (axial, torsional, and flexural) is observed throughout the structure. To get accurate calculations of the torsional response of beams using NASTRAN, torsional inertia effects had to be added to the mass matrix calculation section of the program. Also, mass effects were included in the calculation of BAR forces to improve the continuity of power flow between elements. The importance of including all types of power flow in an analysis, rather than only flexural power, is indicated by the example. Trying to interpret power flow results that only consider flexural components in even a moderately complex problem will result in incorrect conclusions concerning the total power flow field.

  4. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    PubMed

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50 ) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50 . Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady-state caused by a short resting time between increased concentrations of agonist, which affect the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nm) is higher than the classically interpreted EC50 (46-191 nm). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
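The EC50-estimation idea can be illustrated with a simple saturating dose-response fit, here recovering EC50 by grid search on synthetic steady-state data (the Hill-type model, known Emax, and all values are illustrative assumptions, not the paper's ODE model):

```python
import numpy as np

def hill(conc, emax, ec50):
    """Simple saturating dose-response (Hill coefficient 1)."""
    return emax * conc / (conc + ec50)

def fit_ec50(conc, resp, emax):
    """Grid-search the EC50 that minimizes squared error (emax known)."""
    grid = np.logspace(-10.0, -4.0, 2001)   # candidate EC50 values, molar
    sse = [float(np.sum((resp - hill(conc, emax, e)) ** 2)) for e in grid]
    return float(grid[int(np.argmin(sse))])

conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])   # agonist concentrations, M
resp = hill(conc, 1.0, 3.0e-7)                    # synthetic data, EC50 = 300 nM
ec50_est = fit_ec50(conc, resp, 1.0)
```

The paper's point is that when measured responses are not at steady state, fitting such a curve directly biases EC50 downward; the model-based correction simulates the system to steady state before fitting.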

  5. Radiologist Agreement for Mammographic Recall by Case Difficulty and Finding Type

    PubMed Central

    Onega, Tracy; Smith, Megan; Miglioretti, Diana L.; Carney, Patricia A.; Geller, Berta; Kerlikowske, Karla; Buist, Diana SM; Rosenberg, Robert D.; Smith, Robert; Sickles, Edward A.; Haneuse, Sebastien; Anderson, Melissa L.; Yankaskas, Bonnie

    2012-01-01

INTRODUCTION To assess agreement of mammography interpretations by community radiologists with consensus interpretations of an expert radiology panel, to inform approaches that improve mammography performance. METHODS From six mammography registries, 119 community-based radiologists were recruited to assess one of four randomly assigned test sets of 109 screening mammograms with comparison studies for no recall or recall, giving the most significant finding type [mass, calcifications, asymmetric density or architectural distortion] and location. The mean proportion of agreement with an expert radiology panel was calculated by cancer status, finding type, and difficulty level of identifying the finding at the woman, breast, and lesion level. We also examined concordance in finding type between study radiologists and the expert panel. For each finding type, we determined the proportion of unnecessary recalls, defined as study radiologist recalls that were not expert panel recalls. RESULTS Recall agreement was 100% for masses and for exams with obvious findings in both cancer and non-cancer cases. Among cancer cases, recall agreement was lower for lesions that were subtle (50%) or asymmetric (60%). Subtle non-cancer findings and benign calcifications showed 33% agreement for recall. Agreement for the finding responsible for recall was low, especially for architectural distortions (43%) and asymmetric densities (40%). Most unnecessary recalls (51%) were asymmetric densities. CONCLUSION Agreement in mammography interpretation was low for asymmetric densities and architectural distortions. Training focused on these interpretations could improve mammography accuracy and reduce unnecessary recalls. PMID:23122345

  6. Evaluation of Quantra Hologic Volumetric Computerized Breast Density Software in Comparison With Manual Interpretation in a Diverse Population

    PubMed Central

    Richard-Davis, Gloria; Whittemore, Brianna; Disher, Anthony; Rice, Valerie Montgomery; Lenin, Rathinasamy B; Dollins, Camille; Siegel, Eric R; Eswaran, Hari

    2018-01-01

Objective: Increased mammographic breast density is a well-established risk factor for breast cancer development, regardless of age or ethnic background. The current gold standard for categorizing breast density consists of a radiologist estimation of percent density according to the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) criteria. This study compares paired qualitative interpretations of breast density on digital mammograms with quantitative measurement of density using Hologic’s Food and Drug Administration–approved R2 Quantra volumetric breast density assessment tool. Our goal was to find the best cutoff value of Quantra-calculated breast density for stratifying patients accurately into high-risk and low-risk breast density categories. Methods: Screening digital mammograms from 385 subjects, aged 18 to 64 years, were evaluated. These mammograms were interpreted by a radiologist using the ACR’s BI-RADS density method, and had quantitative density measured using the R2 Quantra breast density assessment tool. The appropriate cutoff for breast density–based risk stratification using Quantra software was calculated using manually determined BI-RADS scores as a gold standard, in which scores of D3/D4 denoted high-risk densities and D1/D2 denoted low-risk densities. Results: The best cutoff value for risk stratification using Quantra-calculated breast density was found to be 14.0%, yielding a sensitivity of 65%, specificity of 77%, and positive and negative predictive values of 75% and 69%, respectively. Under bootstrap analysis, the best cutoff value had a mean ± SD of 13.70% ± 0.89%. Conclusions: Our study is the first to publish on a North American population that assesses the accuracy of the R2 Quantra system at breast density stratification. Quantitative breast density measures will improve accuracy and reliability of density determination, assisting future researchers to accurately calculate breast cancer risks associated with density increase. PMID:29511356

  7. Evaluation of Quantra Hologic Volumetric Computerized Breast Density Software in Comparison With Manual Interpretation in a Diverse Population.

    PubMed

    Richard-Davis, Gloria; Whittemore, Brianna; Disher, Anthony; Rice, Valerie Montgomery; Lenin, Rathinasamy B; Dollins, Camille; Siegel, Eric R; Eswaran, Hari

    2018-01-01

    Increased mammographic breast density is a well-established risk factor for breast cancer development, regardless of age or ethnic background. The current gold standard for categorizing breast density consists of a radiologist estimation of percent density according to the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) criteria. This study compares paired qualitative interpretations of breast density on digital mammograms with quantitative measurement of density using Hologic's Food and Drug Administration-approved R2 Quantra volumetric breast density assessment tool. Our goal was to find the best cutoff value of Quantra-calculated breast density for stratifying patients accurately into high-risk and low-risk breast density categories. Screening digital mammograms from 385 subjects, aged 18 to 64 years, were evaluated. These mammograms were interpreted by a radiologist using the ACR's BI-RADS density method, and had quantitative density measured using the R2 Quantra breast density assessment tool. The appropriate cutoff for breast density-based risk stratification using Quantra software was calculated using manually determined BI-RADS scores as a gold standard, in which scores of D3/D4 denoted high-risk densities and D1/D2 denoted low-risk densities. The best cutoff value for risk stratification using Quantra-calculated breast density was found to be 14.0%, yielding a sensitivity of 65%, specificity of 77%, and positive and negative predictive values of 75% and 69%, respectively. Under bootstrap analysis, the best cutoff value had a mean ± SD of 13.70% ± 0.89%. Our study is the first to publish on a North American population that assesses the accuracy of the R2 Quantra system at breast density stratification. Quantitative breast density measures will improve accuracy and reliability of density determination, assisting future researchers to accurately calculate breast cancer risks associated with density increase.
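The reported operating characteristics follow directly from a 2x2 confusion matrix. As a minimal sketch (the counts below are illustrative, not the study's data), sensitivity, specificity, and the predictive values can be computed as:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only (not the 385-subject study data)
metrics = diagnostic_metrics(tp=130, fp=44, fn=70, tn=141)
```

A cutoff search such as the one described would repeat this computation over candidate density thresholds and pick the threshold optimizing the desired trade-off.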

  8. Open source GIS based tools to improve hydrochemical water resources management in EU H2020 FREEWAT platform

    NASA Astrophysics Data System (ADS)

    Criollo, Rotman; Velasco, Violeta; Vázquez-Suñé, Enric; Nardi, Albert; Marazuela, Miguel A.; Rossetto, Rudy; Borsi, Iacopo; Foglia, Laura; Cannata, Massimiliano; De Filippis, Giovanna

    2017-04-01

    Due to the general increase of water scarcity (Steduto et al., 2012), water quantity and quality must be well characterized to ensure proper access to water resources in compliance with local and regional directives. Tools that facilitate data management and analysis can support this need. Such analyses should provide researchers, professionals, policy makers, and users with the means to improve the management of water resources under standard regulatory guidelines. Compliance with these guidelines (with a special focus on requirements deriving from the Groundwater Directive, GWD) requires effective monitoring, evaluation, and interpretation of a large number of physical and chemical parameters. These datasets have to be assessed and interpreted by: (i) integrating data from different sources, gathered with different data-access techniques and formats; (ii) managing data with varying temporal and spatial extent; and (iii) integrating groundwater quality information with other relevant information, such as further hydrogeological data (Velasco et al., 2014), and pre-processing these data, generally for the realization of groundwater models. In this context, the Hydrochemical Analysis Tools (akvaGIS Tools) have been implemented within the H2020 FREEWAT project, which aims to support water resource management through modelling in an open-source GIS platform (QGIS desktop). The main goal of akvaGIS Tools is to improve water quality analysis through capabilities that refine the case-study conceptual model, managing all related data in a geospatial database (implemented in SpatiaLite) together with a set of tools for the harmonization, integration, standardization, visualization, and interpretation of hydrochemical data. 
To achieve this, a set of commands covers a wide range of methodologies for querying, interpreting, and comparing groundwater quality data, and facilitates the pre-processing analysis used in the realization of groundwater models. These include ionic balance calculations, chemical time-series analysis, correlation of chemical parameters, and calculation of various common hydrochemical diagrams (salinity, Schoeller-Berkaloff, Piper, and Stiff), among others. Furthermore, the tools allow the generation of maps of the spatial distribution of parameters and diagrams, as well as thematic maps for the parameters measured and classified in the queried area. References: Rossetto R., Borsi I., Schifani C., Bonari E., Mogorovich P., Primicerio M. (2013). SID&GRID: Integrating hydrological modeling in GIS environment. Rendiconti Online Societa Geologica Italiana, Vol. 24, 282-283. Steduto, P., Faurès, J.M., Hoogeveen, J., Winpenny, J.T., Burke, J.J. (2012). Coping with water scarcity: an action framework for agriculture and food security. ISSN 1020-1203; 38. Velasco, V., Tubau, I., Vázquez-Suñé, E., Gogu, R., Gaitanaru, D., Alcaraz, M., Sanchez-Vila, X. (2014). GIS-based hydrogeochemical analysis tools (QUIMET). Computers & Geosciences, 70, 164-180.
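The ionic balance calculation mentioned above is a standard quality check on a water analysis: cation and anion concentrations are converted to milliequivalents per litre and a charge-balance error is computed. A minimal sketch, assuming a small illustrative ion set (the abstract does not describe the akvaGIS implementation itself):

```python
# Ion charges and molar masses (g/mol) for a few common species
CHARGE = {"Ca": 2, "Mg": 2, "Na": 1, "K": 1, "Cl": 1, "SO4": 2, "HCO3": 1}
MASS = {"Ca": 40.08, "Mg": 24.31, "Na": 22.99, "K": 39.10,
        "Cl": 35.45, "SO4": 96.06, "HCO3": 61.02}

def meq_per_l(ion, mg_per_l):
    """Convert a concentration in mg/L to milliequivalents per litre."""
    return mg_per_l * CHARGE[ion] / MASS[ion]

def charge_balance_error(cations_mg_l, anions_mg_l):
    """Charge-balance error in percent: 100 * (sum_cat - sum_an) / (sum_cat + sum_an)."""
    cat = sum(meq_per_l(i, c) for i, c in cations_mg_l.items())
    an = sum(meq_per_l(i, c) for i, c in anions_mg_l.items())
    return 100.0 * (cat - an) / (cat + an)
```

Analyses with an error beyond a few percent are typically flagged for re-examination before being used in a conceptual or numerical model.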

  9. Calculation of structural dynamic forces and stresses using mode acceleration

    NASA Technical Reports Server (NTRS)

    Blelloch, Paul

    1989-01-01

    The standard mode acceleration formulation in structural dynamics has often been interpreted to suggest that its improved convergence arises because the dynamic correction term is divided by the squares of the modal frequencies. An alternative formulation is presented which clearly indicates that the only difference between mode acceleration and mode displacement data recovery is the addition of a static correction term. Attention is given to the advantages in numerical implementation associated with this alternative, as well as to an illustrative example.
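The point of the alternative formulation can be illustrated numerically: with a truncated modal basis, adding the static correction for the discarded modes recovers the exact response to a static load, while mode displacement recovery alone does not. A sketch on a 3-DOF spring-mass chain (the system and load are illustrative, not from the paper):

```python
import numpy as np

# 3-DOF spring-mass chain with unit masses (M = I), so eigh(K) gives omega^2
K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
F = np.array([0.0, 0.0, 1.0])         # static tip load

w2, Phi = np.linalg.eigh(K)           # modal frequencies^2 and mass-normalized modes
u_exact = np.linalg.solve(K, F)       # exact static displacement

r = 1                                 # retain only the lowest mode
q_r = (Phi[:, :r].T @ F) / w2[:r]     # static modal amplitudes of retained modes
u_md = Phi[:, :r] @ q_r               # mode-displacement recovery (truncated)

# Mode acceleration = mode displacement + static correction for truncated modes
static_correction = u_exact - Phi[:, :r] @ ((Phi[:, :r].T @ F) / w2[:r])
u_ma = u_md + static_correction       # exact for a static load
```

For dynamic loads the correction is applied at each time step; the static-load case above isolates the role of the correction term.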

  10. Item hierarchy-based analysis of the Rivermead Mobility Index resulted in improved interpretation and enabled faster scoring in patients undergoing rehabilitation after stroke.

    PubMed

    Roorda, Leo D; Green, John R; Houwink, Annemieke; Bagley, Pam J; Smith, Jane; Molenaar, Ivo W; Geurts, Alexander C

    2012-06-01

    To enable improved interpretation of the total score and faster scoring of the Rivermead Mobility Index (RMI) by studying item ordering or hierarchy and formulating start-and-stop rules in patients after stroke. Cohort study. Rehabilitation center in the Netherlands; stroke rehabilitation units and the community in the United Kingdom. Item hierarchy of the RMI was studied in an initial group of patients (n=620; mean age ± SD, 69.2±12.5y; 297 [48%] men; 304 [49%] left hemisphere lesion, and 269 [43%] right hemisphere lesion), and the adequacy of the item hierarchy-based start-and-stop rules was checked in a second group of patients (n=237; mean age ± SD, 60.0±11.3y; 139 [59%] men; 103 [44%] left hemisphere lesion, and 93 [39%] right hemisphere lesion) undergoing rehabilitation after stroke. Not applicable. Mokken scale analysis was used to investigate the fit of the double monotonicity model, indicating hierarchical item ordering. The percentages of patients with a difference between the RMI total score and the scores based on the start-and-stop rules were calculated to check the adequacy of these rules. The RMI had good fit of the double monotonicity model (coefficient H(T)=.87). The interpretation of the total score improved. Item hierarchy-based start-and-stop rules were formulated. The percentages of patients with a difference between the RMI total score and the score based on the recommended start-and-stop rules were 3% and 5%, respectively. Ten of the original 15 items had to be scored after applying the start-and-stop rules. Item hierarchy was established, enabling improved interpretation and faster scoring of the RMI. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  11. Limb Correction of Polar-Orbiting Imagery for the Improved Interpretation of RGB Composites

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Elmer, Nicholas

    2016-01-01

    Red-Green-Blue (RGB) composite imagery combines information from several spectral channels into one image to aid in the operational analysis of atmospheric processes. However, infrared channels are adversely affected by the limb effect, the result of an increase in optical path length of the absorbing atmosphere between the satellite and the earth as viewing zenith angle increases. This paper reviews a newly developed technique to quickly correct for limb effects in both clear and cloudy regions using latitudinally and seasonally varying limb correction coefficients for real-time applications. These limb correction coefficients account for the increase in optical path length in order to produce limb-corrected RGB composites. The improved utility of a limb-corrected Air Mass RGB composite from the application of this approach is demonstrated using Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, the limb correction can be applied to any polar-orbiting sensor infrared channels, provided the proper limb correction coefficients are calculated. Corrected RGB composites provide multiple advantages over uncorrected RGB composites, including increased confidence in the interpretation of RGB features, improved situational awareness for operational forecasters, and the ability to use RGB composites from multiple sensors jointly to increase the temporal frequency of observations.

  12. Application of strict criteria in adrenal venous sampling increases the proportion of missed patients with unilateral disease who benefit from surgery for primary aldosteronism.

    PubMed

    Kline, Gregory; Leung, Alexander; So, Benny; Chin, Alex; Harvey, Adrian; Pasieka, Janice L

    2018-06-01

    Adrenal vein sampling (AVS) is intended to confirm unilateral forms of primary aldosteronism, which are amenable to surgical cure. Excessively strict AVS criteria to define lateralization may result in many patients incorrectly categorized as bilateral primary aldosteronism and opportunity for surgical cure missed. Retrospective review of an AVS-primary aldosteronism database in which surgical cases are verified by standardized outcomes. Having used 'less strict' AVS criteria for lateralization, we examined the distribution of AVS lateralization indices in our confirmed unilateral primary aldosteronism cases both with and without cosyntropin stimulation. The proportion of proven unilateral cases that would have been missed with stricter AVS interpretation criteria was calculated. Particular focus was given to the proportion of missed cases according to use of international guidelines. False-positive lateralization with 'less strict' interpretation was also calculated. Of 80 surgical primary aldosteronism cases, 10-23% would have been missed with AVS lateralization indices of 3 : 1 to 5 : 1, with or without cosyntropin. If strict selectivity indices (for confirmation of catheterization) were combined with strict lateralization indices, up to 70% of unilateral primary aldosteronism cases could have been missed. Use of Endocrine Society AVS guidelines would have missed 21-43% of proven unilateral cases. 'Less strict' AVS interpretation yielded one case (1.2%) of false lateralization. Excessively strict AVS interpretation criteria will result in a high rate of missed unilateral primary aldosteronism with subsequent loss of opportunity for intervention. Use of more lenient lateralization criteria will improve the detection rate of unilateral primary aldosteronism with very low false-positive rate.
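The selectivity and lateralization indices discussed above are ratios of cortisol and cortisol-corrected aldosterone concentrations. A minimal sketch (the variable names and numeric values are illustrative assumptions; the choice of cutoff, e.g. 2:1 versus 4:1, is precisely what the study interrogates):

```python
def selectivity_index(cortisol_adrenal, cortisol_peripheral):
    """Confirms adrenal vein catheterization: adrenal vs peripheral cortisol."""
    return cortisol_adrenal / cortisol_peripheral

def lateralization_index(aldo_dom, cort_dom, aldo_contra, cort_contra):
    """Cortisol-corrected aldosterone ratio, dominant vs contralateral vein."""
    return (aldo_dom / cort_dom) / (aldo_contra / cort_contra)

# Illustrative values: an 8:1 excess lateralizes under either a lenient
# (>= 2:1) or a strict (>= 4:1) criterion.
li = lateralization_index(aldo_dom=1600, cort_dom=40,
                          aldo_contra=100, cort_contra=20)
```

The study's argument is that raising these cutoffs trades a small reduction in false lateralization for a large number of missed surgical candidates.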

  13. Integrating concepts and skills: Slope and kinematics graphs

    NASA Astrophysics Data System (ADS)

    Tonelli, Edward P., Jr.

    The concept of force is a foundational idea in physics. To predict the results of applying forces to objects, a student must be able to interpret data representing changes in distance, time, speed, and acceleration. Comprehension of kinematics concepts requires students to interpret motion graphs, where rates of change are represented as slopes of line segments. Studies have shown that a majority of students who show proficiency with mathematical concepts fail to interpret motion graphs accurately. The primary aim of this study was to examine how students apply their knowledge of slope when interpreting kinematics graphs. To answer the research questions, a mixed-methods research design, which included a survey and interviews, was adopted. Ninety-eight (N=98) high school students completed surveys that were quantitatively analyzed, along with qualitative information collected from interviews of students (N=15) and teachers (N=2). The study showed that students who recalled methods for calculating slopes and speeds calculated slopes accurately but calculated speeds inaccurately. When comparing slopes and speeds, most students resorted to calculating instead of visual inspection. Most students recalled and applied memorized rules. Students who calculated slopes and speeds inaccurately failed to recall methods of calculating slopes and speeds, but when comparing speeds, these students connected the concepts of distance and time to the line segments and the rates of change they represented. These findings will likely help mathematics and science educators better assist their students in applying their knowledge of slope to kinematics concepts.

  14. Hydrologic Evaluation of a Humid Climate Poplar Phytoremediation Barrier

    NASA Astrophysics Data System (ADS)

    Swensen, K.; Rabideau, A. J.

    2016-12-01

    The emplacement of hybrid poplar trees to function as phytoremediation barriers is an appealing and sustainable groundwater management strategy because of low maintenance costs and the potential to extract large amounts of groundwater without pumping. While the effectiveness of poplar barriers has been assessed by groundwater quality monitoring, less attention has been given to physical hydrologic evaluations needed to improve barrier designs. In this research, a five year hydrologic evaluation was conducted at a poplar phytoremediation site in western NY, with the goal of quantifying ETg (evapotranspiration from groundwater) as a measure of the barrier's effectiveness in a humid climate. To consider transpiration from both vadose zone and groundwater, the hydrologic evaluation included four components: physical ET measurements, theoretical ET calculations, analysis of diurnal groundwater table fluctuations, and vadose zone modeling. The direct measurements of ETT (total) were obtained using sap flow meters installed on multiple trees within the barrier. These data were interpreted using a regression model that included theoretical ET calculations and site-specific measurements of weather parameters and poplar trunk area. Application of this model was challenged by the spatial variation in rooting depth as determined by tree excavations. To further quantify the removal of groundwater by the phytobarrier (ETg), the White Method was applied to interpret diurnal groundwater fluctuations from monitoring wells located within the barrier, in conjunction with a variably saturated-saturated flow model configured to confirm water extraction from ETg. Taken together, the results of this five year hydrologic evaluation highlight the complexity in quantifying humid climate groundwater extraction, as a large number of variables were found to influence these rates. Improved understanding of these controls will contribute to improved barrier designs that maximize ETg.
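The White method referenced above estimates groundwater evapotranspiration from diurnal water-table fluctuations: ETg = Sy(24r + s), where Sy is the specific yield, r is the hourly water-table recovery rate observed overnight, and s is the net daily change in water-table elevation. A minimal sketch with illustrative numbers (sign conventions for s vary between applications, so this is one common form, not necessarily the exact variant used at the NY site):

```python
def white_method_etg(specific_yield, nighttime_recovery_m_per_h, net_daily_decline_m):
    """White (1932) estimate of groundwater ET (m/day).

    ETg = Sy * (24*r + s), with s taken positive for a net daily
    decline of the water table. Sign conventions vary in the literature.
    """
    return specific_yield * (24.0 * nighttime_recovery_m_per_h + net_daily_decline_m)

# Illustrative: Sy = 0.12, overnight recovery 1.5 mm/h, net decline 8 mm/day
etg = white_method_etg(0.12, 0.0015, 0.008)   # m/day
```

In practice the recovery rate r is fitted to pre-dawn well hydrograph segments, when transpiration is assumed negligible.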

  15. Interactive or static reports to guide clinical interpretation of cancer genomics.

    PubMed

    Gray, Stacy W; Gagan, Jeffrey; Cerami, Ethan; Cronin, Angel M; Uno, Hajime; Oliver, Nelly; Lowenstein, Carol; Lederman, Ruth; Revette, Anna; Suarez, Aaron; Lee, Charlotte; Bryan, Jordan; Sholl, Lynette; Van Allen, Eliezer M

    2018-05-01

    Misinterpretation of complex genomic data presents a major challenge in the implementation of precision oncology. We sought to determine whether interactive genomic reports with embedded clinician education and optimized data visualization improved genomic data interpretation. We conducted a randomized, vignette-based survey study to determine whether exposure to interactive reports for a somatic gene panel, as compared to static reports, improves physicians' genomic comprehension and report-related satisfaction (overall scores calculated across 3 vignettes, range 0-18 and 1-4, respectively, higher score corresponding with improved endpoints). One hundred and five physicians at a tertiary cancer center participated (29% participation rate): 67% medical, 20% pediatric, 7% radiation, and 7% surgical oncology; 37% female. Prior to viewing the case-based vignettes, 34% of the physicians reported difficulty making treatment recommendations based on the standard static report. After vignette/report exposure, physicians' overall comprehension scores did not differ by report type (mean score: interactive 11.6 vs static 10.5, difference = 1.1, 95% CI, -0.3, 2.5, P = .13). However, physicians exposed to the interactive report were more likely to correctly assess sequencing quality (P < .001) and understand when reports needed to be interpreted with caution (eg, low tumor purity; P = .02). Overall satisfaction scores were higher in the interactive group (mean score 2.5 vs 2.1, difference = 0.4, 95% CI, 0.2-0.7, P = .001). Interactive genomic reports may improve physicians' ability to accurately assess genomic data and increase report-related satisfaction. Additional research in users' genomic needs and efforts to integrate interactive reports into electronic health records may facilitate the implementation of precision oncology.

  16. CAN'T MISS--conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
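The descriptive statistics covered in this first part have direct one-line implementations. A minimal sketch (the sample variance with the n-1 denominator is shown, since the series is about inference from samples; the data are illustrative):

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    # even-length samples average the two middle values
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def sample_variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # n-1 denominator

def sample_sd(xs):
    return sample_variance(xs) ** 0.5

data = [2, 4, 4, 4, 5, 5, 7, 9]   # illustrative continuous measurements
```

Python's standard-library `statistics` module provides the same quantities (`statistics.variance` uses n-1, `statistics.pvariance` uses n).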

  17. Short-distance matrix elements for $D$-meson mixing for 2+1 lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia Cheng

    2015-01-01

    We study the short-distance hadronic matrix elements for D-meson mixing with partially quenched N_f = 2+1 lattice QCD. We use a large set of the MIMD Lattice Computation Collaboration's gauge configurations with a²tadpole-improved staggered sea quarks and tadpole-improved Lüscher-Weisz gluons. We use the a²tadpole-improved action for valence light quarks and the Sheikholeslami-Wohlert action with the Fermilab interpretation for the valence charm quark. Our calculation covers the complete set of five operators needed to constrain new physics models for D-meson mixing. We match our matrix elements to the MS-bar-NDR scheme evaluated at 3 GeV. We report values for the Beneke-Buchalla-Greub-Lenz-Nierste choice of evanescent operators.

  18. Using base rates of low scores to interpret the ANAM4 TBI-MIL battery following mild traumatic brain injury.

    PubMed

    Ivins, Brian J; Lange, Rael T; Cole, Wesley R; Kane, Robert; Schwab, Karen A; Iverson, Grant L

    2015-02-01

    Base rates of low ANAM4 TBI-MIL scores were calculated in a convenience sample of 733 healthy male active duty soldiers using available military reference values for the following cutoffs: ≤2nd percentile (2 SDs), ≤5th percentile, <10th percentile, and <16th percentile (1 SD). Rates of low scores were also calculated in 56 active duty male soldiers who sustained an mTBI an average of 23 days (SD = 36.1) prior. 22.0% of the healthy sample and 51.8% of the mTBI sample had two or more scores below 1 SD (i.e., 16th percentile). 18.8% of the healthy sample and 44.6% of the mTBI sample had one or more scores ≤5th percentile. Rates of low scores in the healthy sample were influenced by cutoffs and race/ethnicity. Importantly, some healthy soldiers obtain at least one low score on ANAM4. These base rate analyses can improve the methodology for interpreting ANAM4 performance in clinical practice and research. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
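The base-rate analysis described above counts how many examinees in a normative sample obtain at least k subtest scores below a given cutoff, since isolated low scores are common even in healthy test-takers. A minimal sketch (the scores below are toy values, not ANAM4 norms):

```python
def low_score_base_rate(battery_scores, cutoff, min_low=2):
    """Proportion of examinees with >= min_low subtest scores below cutoff.

    battery_scores: one row of subtest scores per examinee.
    """
    n_flagged = sum(
        1 for row in battery_scores
        if sum(score < cutoff for score in row) >= min_low
    )
    return n_flagged / len(battery_scores)

# Toy normative sample: 4 examinees x 3 subtests, cutoff at a score of 40
sample = [[55, 52, 61], [38, 39, 50], [45, 36, 48], [60, 58, 57]]
rate = low_score_base_rate(sample, cutoff=40, min_low=2)
```

Comparing such base rates between healthy and clinical samples, as the study does across several percentile cutoffs, shows how much multivariate "lowness" is diagnostic rather than normal variation.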

  19. A Study Assessing the Potential of Negative Effects in Interdisciplinary Math–Biology Instruction

    PubMed Central

    Madlung, Andreas; Bremer, Martina; Himelblau, Edward; Tullis, Alexa

    2011-01-01

    There is increasing enthusiasm for teaching approaches that combine mathematics and biology. The call for integrating more quantitative work in biology education has led to new teaching tools that improve quantitative skills. Little is known, however, about whether increasing interdisciplinary work can lead to adverse effects, such as the development of broader but shallower skills or the possibility that math anxiety causes some students to disengage in the classroom, or, paradoxically, to focus so much on the mathematics that they lose sight of its application for the biological concepts in the center of the unit at hand. We have developed and assessed an integrative learning module and found disciplinary learning gains to be equally strong in first-year students who actively engaged in embedded quantitative calculations as in those students who were merely presented with quantitative data in the context of interpreting biological and biostatistical results. When presented to advanced biology students, our quantitative learning tool increased test performance significantly. We conclude from our study that the addition of mathematical calculations to the first year and advanced biology curricula did not hinder overall student learning, and may increase disciplinary learning and data interpretation skills in advanced students. PMID:21364099

  20. The Band Structure of Polymers: Its Calculation and Interpretation. Part 2. Calculation.

    ERIC Educational Resources Information Center

    Duke, B. J.; O'Leary, Brian

    1988-01-01

    Details ab initio crystal orbital calculations using all-trans-polyethylene as a model. Describes calculations based on various forms of translational symmetry. Compares these calculations with ab initio molecular orbital calculations discussed in a preceding article. Discusses three major approximations made in the crystal case. (CW)

  1. Radiologist Agreement for Mammographic Recall by Case Difficulty and Finding Type.

    PubMed

    Onega, Tracy; Smith, Megan; Miglioretti, Diana L; Carney, Patricia A; Geller, Berta A; Kerlikowske, Karla; Buist, Diana S M; Rosenberg, Robert D; Smith, Robert A; Sickles, Edward A; Haneuse, Sebastien; Anderson, Melissa L; Yankaskas, Bonnie

    2016-11-01

    The aim of this study was to assess agreement of mammographic interpretations by community radiologists with consensus interpretations of an expert radiology panel to inform approaches that improve mammographic performance. From 6 mammographic registries, 119 community-based radiologists were recruited to assess 1 of 4 randomly assigned test sets of 109 screening mammograms with comparison studies for no recall or recall, giving the most significant finding type (mass, calcifications, asymmetric density, or architectural distortion) and location. The mean proportion of agreement with an expert radiology panel was calculated by cancer status, finding type, and difficulty level of identifying the finding at the patient, breast, and lesion level. Concordance in finding type between study radiologists and the expert panel was also examined. For each finding type, the proportion of unnecessary recalls, defined as study radiologist recalls that were not expert panel recalls, was determined. Recall agreement was 100% for masses and for examinations with obvious findings in both cancer and noncancer cases. Among cancer cases, recall agreement was lower for lesions that were subtle (50%) or asymmetric (60%). Subtle noncancer findings and benign calcifications showed 33% agreement for recall. Agreement for finding responsible for recall was low, especially for architectural distortions (43%) and asymmetric densities (40%). Most unnecessary recalls (51%) were asymmetric densities. Agreement in mammographic interpretation was low for asymmetric densities and architectural distortions. Training focused on these interpretations could improve the accuracy of mammography and reduce unnecessary recalls. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.
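Two of the quantities reported above, the proportion of recall agreement with the expert panel and the unnecessary-recall rate, reduce to simple counts over paired decisions. A minimal sketch with toy reader/panel decisions (1 = recall; the data are illustrative):

```python
def recall_agreement(reader, panel):
    """Proportion of cases where the reader's recall decision matches the panel's."""
    return sum(r == p for r, p in zip(reader, panel)) / len(panel)

def unnecessary_recall_rate(reader, panel):
    """Among the reader's recalls, the fraction the expert panel did not recall."""
    recalled = [(r, p) for r, p in zip(reader, panel) if r]
    return sum(1 for _, p in recalled if not p) / len(recalled)

reader = [1, 1, 0, 1, 0, 0]   # toy decisions, 1 = recall
panel  = [1, 0, 0, 0, 0, 1]
```

In the study these proportions are further stratified by cancer status, finding type, and difficulty level.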

  2. Analysis and recent advances in gamma heating measurements in MINERVE facility by using TLD and OSLD techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amharrak, H.; Di Salvo, J.; Lyoussi, A.

    2011-07-01

    The objective of this study is to develop nuclear heating measurement methods in zero-power experimental reactors. This paper presents the analysis of Thermo-Luminescent Detector (TLD) and Optically Stimulated Luminescent Detector (OSLD) experiments in the UO2 core of the MINERVE research reactor at CEA Cadarache. The experimental sources of uncertainty on the gamma dose have been reduced by improving the conditions, as well as the repeatability, of the calibration step for each individual TLD. The interpretation of these measurements must take into account the calculation of cavity correction factors, related to the calibration and irradiation configurations, as well as neutron correction calculations. These calculations are based on Monte Carlo simulations of coupled neutron-gamma and gamma-electron transport. The TLDs and OSLDs are positioned inside aluminum pillboxes. The comparison between calculated and measured integral gamma-ray absorbed doses shows that for TLDs the calculation slightly overestimates the measurement, with a C/E value of 1.05 ± 5.3% (k = 2); for OSLDs the calculation slightly underestimates the measurement, with a C/E value of 0.96 ± 7.0% (k = 2). (authors)

  3. Beyond the Golden Ratio: A Calculator-Based Investigation.

    ERIC Educational Resources Information Center

    Glidden, Peter L.

    2001-01-01

    Describes computation of a continued radical to approximate the golden ratio and presents two well-known geometric interpretations of it. Uses guided-discovery to investigate different repeated radicals to see what values they approximate, the golden-rectangle interpretation of these continued radicals, and the golden-section interpretation. (KHR)
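The continued radical mentioned above, sqrt(1 + sqrt(1 + sqrt(1 + ...))), converges to the golden ratio, and the generalization sqrt(k + sqrt(k + ...)) converges to (1 + sqrt(1 + 4k))/2, the positive root of x² = k + x. A short fixed-point iteration makes this easy to verify on a calculator or in code:

```python
import math

def continued_radical(k, iterations=60):
    """Iterate x <- sqrt(k + x); converges to the positive root of x**2 = k + x."""
    x = 0.0
    for _ in range(iterations):
        x = math.sqrt(k + x)
    return x

phi = (1 + math.sqrt(5)) / 2   # golden ratio: the k = 1 limit
```

Trying a few values of k (e.g. k = 2, whose limit is exactly 2) mirrors the guided-discovery exercise the article describes.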

  4. Evaluation of the National Surgical Quality Improvement Program Universal Surgical Risk Calculator for a gynecologic oncology service.

    PubMed

    Szender, J Brian; Frederick, Peter J; Eng, Kevin H; Akers, Stacey N; Lele, Shashikant B; Odunsi, Kunle

    2015-03-01

    The National Surgical Quality Improvement Program is aimed at preventing perioperative complications. An online calculator was recently published, but the primary studies used limited gynecologic surgery data. The purpose of this study was to evaluate the performance of the National Surgical Quality Improvement Program Universal Surgical Risk Calculator (URC) on the patients of a gynecologic oncology service. We reviewed 628 consecutive surgeries performed by our gynecologic oncology service between July 2012 and June 2013. Demographic data, including diagnosis and cancer stage if applicable, were collected. Charts were reviewed to determine complication rates. Specific complications were as follows: death, pneumonia, cardiac complications, surgical site infection (SSI) or urinary tract infection, renal failure, or venous thromboembolic event. Data were compared with modeled outcomes using Brier scores and receiver operating characteristic curves. Significance was declared based on P < 0.05. The model accurately predicted death and venous thromboembolic events, with Brier scores of 0.004 and 0.003, respectively. Predicted risk was 50% greater than experienced for urinary tract infection; the experienced SSI and pneumonia rates were 43% and 36% greater than predicted. For any complication, a Brier score of 0.023 indicates poor performance of the model. In this study of gynecologic surgeries, we could not verify the predictive value of the URC for cardiac complications, SSI, and pneumonia. One disadvantage of applying a URC to multiple subspecialties is that, for some categories, complications are not accurately estimated. Our data demonstrate that some predicted risks reported by the calculator need to be interpreted with reservation.
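The Brier score used above is the mean squared difference between predicted probabilities and observed 0/1 outcomes, so 0 is perfect and an uninformative constant 0.5 forecast scores 0.25. A minimal sketch:

```python
def brier_score(predicted, observed):
    """Mean squared error of probabilistic predictions against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

# Perfect predictions score 0; a constant 0.5 forecast scores 0.25.
perfect = brier_score([1.0, 0.0, 1.0], [1, 0, 1])
coin = brier_score([0.5, 0.5, 0.5], [1, 0, 1])
```

Note that for rare events (such as perioperative death) even a trivially pessimistic "never happens" forecast earns a small Brier score, which is why the study pairs it with ROC analysis.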

  5. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhaeuser 2008.

  6. Corrections to the geometrical interpretation of bosonization

    NASA Astrophysics Data System (ADS)

    Steiner, Manfred; Marston, Brad

    2012-02-01

    Bosonization is a powerful approach for understanding certain strongly-correlated fermion systems, especially in one spatial dimension but also in higher dimensions [A. Houghton, H.-J. Kwon and J. B. Marston, Adv. Phys. 49, 141 (2000)]. The method may be interpreted geometrically in terms of deformations of the Fermi surface, and the quantum operator that effects the deformations may be expressed in terms of a bilinear combination of fermion creation and annihilation operators. Alternatively the deformation operator has an approximate representation in terms of coherent states of bosonic fields [A. H. Castro Neto and E. Fradkin, Phys. Rev. B 49, 10877 (1994)]. Calculation of the inner product of deformed Fermi surfaces within the two representations reveals corrections to the bosonic picture both in one and higher spatial dimensions. We discuss the implications of the corrections for efforts to improve the usefulness of multidimensional bosonization.

  7. Mechanistic interpretation of nondestructive pavement testing deflections

    NASA Astrophysics Data System (ADS)

    Hoffman, M. S.; Thompson, M. R.

    1981-06-01

    A method for the back-calculation of material properties in flexible pavements based on the interpretation of surface deflection measurements is proposed. ILLI-PAVE, a stress-dependent finite element pavement model, was used to generate data for developing algorithms and nomographs for deflection basin interpretation. Twenty-four different flexible pavement sections throughout the State of Illinois were studied. Deflections were measured and loading-mode effects on pavement response were investigated. The factors controlling the pavement response to different loading modes are identified and explained. Correlations between different devices are developed. The back-calculated parameters derived from the proposed evaluation procedure can be used as inputs for asphalt concrete overlay design.

  8. An Interpreted Language and System for the Visualization of Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    We present an interpreted language and system supporting the visualization of unstructured meshes and the manipulation of shapes defined in terms of mesh subsets. The language features primitives inspired by geometric modeling, mathematical morphology and algebraic topology. The adaptation of the topology ideas to an interpreted environment, along with support for programming constructs such as user function definition, provides a flexible system for analyzing a mesh and for calculating with shapes defined in terms of the mesh. We present results demonstrating some of the capabilities of the language, based on an implementation called the Shape Calculator, for tetrahedral meshes in R^3.

  9. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    PubMed

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe a procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
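The chemical formula calculation step that RAMSI folds into its optimization can be illustrated in its simplest form: enumerating candidate elemental compositions whose monoisotopic mass matches a measured neutral mass within tolerance. This brute-force sketch over small CHNO formulas is only illustrative (RAMSI itself uses mixed integer linear programming and additional chemical rules; the mass target below is glucose, chosen as a worked example):

```python
from itertools import product

# Monoisotopic masses in Da (well-established values).
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def fmt(c, h, n, o):
    """Format a CHNO composition, omitting elements with zero count."""
    return "".join(f"{s}{k}" for s, k in (("C", c), ("H", h), ("N", n), ("O", o)) if k)

def candidate_formulas(neutral_mass, tol=0.005, max_counts=(20, 40, 5, 10)):
    """Brute-force search over small CHNO formulas whose monoisotopic
    mass lies within `tol` Da of the target neutral mass."""
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_counts)):
        m = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
        if abs(m - neutral_mass) <= tol:
            hits.append((fmt(c, h, n, o), m))
    return hits

# Glucose (C6H12O6) has a monoisotopic mass of ~180.0634 Da.
print(candidate_formulas(180.0634))
```

Real formula calculators add constraints (ring/double-bond equivalents, element ratios) to prune chemically implausible compositions; expressing such rules as linear constraints is what allows the single-step optimization described in the record.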

  10. Dosage calculations for nurses. June L Olsen et al. Pearson Education £14.99 312pp ISBN 9780132068840 (ISBN-10 0132068842).

    PubMed

    2011-05-10

    A comprehensive review of dosage calculation for nursing staff, this book covers accurate calculation skills and interpretation of units of measurement in the context of safe medication-administration practice.

  11. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
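The normal-distribution probabilities and confidence intervals this series teaches can be computed directly in Python's standard library. A minimal sketch (illustrative only; the sample data are hypothetical, and the normal approximation with z = 1.96 assumes a reasonably large sample):

```python
from statistics import NormalDist, mean, stdev

z = NormalDist()  # standard normal distribution: mean 0, sd 1

# Probability that a value falls within ±1.96 SD of the mean (~95%).
p = z.cdf(1.96) - z.cdf(-1.96)

# Approximate 95% confidence interval for a sample mean.
sample = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
m, s = mean(sample), stdev(sample)
half_width = 1.96 * s / len(sample) ** 0.5
print(round(p, 4), (round(m - half_width, 2), round(m + half_width, 2)))
```

For the small samples common in quality improvement work, a t-based interval (wider critical value) would be the more careful choice, which is exactly the kind of distinction the series goes on to develop.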

  12. Lattice QCD spectroscopy for hadronic CP violation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Vries, Jordy; Mereghetti, Emanuele; Seng, Chien -Yeah

    Here, the interpretation of nuclear electric dipole moment (EDM) experiments is clouded by large theoretical uncertainties associated with nonperturbative matrix elements. In various beyond-the-Standard Model scenarios nuclear and diamagnetic atomic EDMs are expected to be dominated by CP-violating pion–nucleon interactions that arise from quark chromo-electric dipole moments. The corresponding CP-violating pion–nucleon coupling strengths are, however, poorly known. In this work we propose a strategy to calculate these couplings by using spectroscopic lattice QCD techniques. Instead of directly calculating the pion–nucleon coupling constants, a challenging task, we use chiral symmetry relations that link the pion–nucleon couplings to nucleon sigma terms and mass splittings that are significantly easier to calculate. In this work, we show that these relations are reliable up to next-to-next-to-leading order in the chiral expansion in both SU(2) and SU(3) chiral perturbation theory. We conclude with a brief discussion about practical details regarding the required lattice QCD calculations and the phenomenological impact of an improved understanding of CP-violating matrix elements.

  13. Lattice QCD spectroscopy for hadronic CP violation

    DOE PAGES

    de Vries, Jordy; Mereghetti, Emanuele; Seng, Chien -Yeah; ...

    2017-01-16

    Here, the interpretation of nuclear electric dipole moment (EDM) experiments is clouded by large theoretical uncertainties associated with nonperturbative matrix elements. In various beyond-the-Standard Model scenarios nuclear and diamagnetic atomic EDMs are expected to be dominated by CP-violating pion–nucleon interactions that arise from quark chromo-electric dipole moments. The corresponding CP-violating pion–nucleon coupling strengths are, however, poorly known. In this work we propose a strategy to calculate these couplings by using spectroscopic lattice QCD techniques. Instead of directly calculating the pion–nucleon coupling constants, a challenging task, we use chiral symmetry relations that link the pion–nucleon couplings to nucleon sigma terms and mass splittings that are significantly easier to calculate. In this work, we show that these relations are reliable up to next-to-next-to-leading order in the chiral expansion in both SU(2) and SU(3) chiral perturbation theory. We conclude with a brief discussion about practical details regarding the required lattice QCD calculations and the phenomenological impact of an improved understanding of CP-violating matrix elements.

  14. Enhancing causal interpretations of quality improvement interventions

    PubMed Central

    Cable, G

    2001-01-01

    In an era of chronic resource scarcity it is critical that quality improvement professionals have confidence that their project activities cause measured change. A commonly used research design, the single group pre-test/post-test design, provides little insight into whether quality improvement interventions cause measured outcomes. A re-evaluation of a quality improvement programme designed to reduce the percentage of bilateral cardiac catheterisations for the period from January 1991 to October 1996 in three catheterisation laboratories in a northeastern state in the USA was performed using an interrupted time series design with switching replications. The accuracy and causal interpretability of the findings were considerably improved compared with the original evaluation design. Moreover, the re-evaluation provided tangible evidence in support of the suggestion that more rigorous designs can and should be more widely employed to improve the causal interpretability of quality improvement efforts. Evaluation designs for quality improvement projects should be constructed to provide a reasonable opportunity, given available time and resources, for causal interpretation of the results. Evaluators of quality improvement initiatives may infrequently have access to randomised designs. Nonetheless, as shown here, other very rigorous research designs are available for improving causal interpretability. Unilateral methodological surrender need not be the only alternative to randomised experiments. Key Words: causal interpretations; quality improvement; interrupted time series design; implementation fidelity PMID:11533426

  15. Focus is key: Panic-focused interpretations are associated with symptomatic improvement in panic-focused psychodynamic psychotherapy.

    PubMed

    Keefe, John R; Solomonov, Nili; Derubeis, Robert J; Phillips, Alexander C; Busch, Fredric N; Barber, Jacques P; Chambless, Dianne L; Milrod, Barbara L

    2018-04-18

    This study examines whether, in panic-focused psychodynamic psychotherapy (PFPP), interpretations of conflicts that underlie anxiety (panic-focused or PF-interpretations) are specifically associated with subsequent panic disorder (PD) symptom improvement, over and above the provision of non-symptom-focused interpretations. Technique use in Sessions 2 and 10 of a 24-session PFPP protocol was assessed for the 65 patients with complete outcome data randomized to PFPP in a two-site trial of psychotherapies for PD. Sessions were rated in 15-min segments for therapists' use of PF-interpretations, non-PF-interpretations, and PF-clarifications. Robust regressions were conducted to examine the relationship between these interventions and symptom change subsequent to the sampled session. Interpersonal problems were examined as a moderator of the relationship of PF-interpretations to symptom change. At Session 10, but not at Session 2, patients who received a higher degree of PF-interpretations experienced greater subsequent improvement in panic symptoms. Non-PF-interpretations were not predictive. Patients with more interpersonal distress benefitted particularly from the use of PF-interpretations at Session 10. By the middle phase of PFPP, panic-focused interpretations may drive subsequent improvements in panic symptoms, especially among patients with higher interpersonal distress. Interpretations of conflict absent a panic focus may not be especially helpful.

  16. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

    PubMed

    Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A

    2018-04-01

    Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics is investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems is reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent and comparable IMRT QA criteria among institutions. © 2018 American Association of Physicists in Medicine.
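The γ metric the TG-218 record discusses combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A deliberately simplified 1-D sketch (not the report's algorithm; real implementations work on 2-D/3-D grids with interpolation, and the 3%/3 mm criteria below are just common example values):

```python
def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Simplified 1-D gamma index: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over all
    evaluated points. dd is the fractional (global) dose criterion,
    dta is the distance criterion in mm."""
    d_norm = max(ref_dose) * dd  # global normalization of dose difference
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g2 = min(((ep - rp) / dta) ** 2 + ((ed - rd) / d_norm) ** 2
                 for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g2 ** 0.5)
    return gammas

pos = [0.0, 1.0, 2.0, 3.0, 4.0]          # positions in mm
ref = [1.0, 2.0, 4.0, 2.0, 1.0]          # calculated dose profile
meas = [1.0, 2.0, 4.0, 2.0, 1.0]         # measured profile (identical here)
g = gamma_1d(pos, ref, pos, meas)
print(max(g))  # identical distributions give gamma = 0 at every point
```

A plan "passes" when the fraction of points with γ ≤ 1 exceeds the chosen tolerance limit; the report's discussion of resolution, normalization, and dose thresholds concerns exactly the choices this sketch glosses over.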

  17. Measuring drug absorption improves interpretation of behavioral responses in a larval zebrafish locomotor assay for predicting seizure liability.

    PubMed

    Cassar, Steven; Breidenbach, Laura; Olson, Amanda; Huang, Xin; Britton, Heather; Woody, Clarissa; Sancheti, Pankajkumar; Stolarik, DeAnne; Wicke, Karsten; Hempel, Katja; LeRoy, Bruce

    2017-11-01

    Unanticipated effects on the central nervous system are a concern during new drug development. A larval zebrafish locomotor assay can reveal seizure liability of experimental molecules before testing in mammals. Data on the relative absorption of compounds by larvae are lacking in prior reports of such assays; having those data may be valuable for interpreting seizure liability assay performance. Twenty-eight reference drugs were tested at multiple dose levels in fish water and analyzed by a blinded investigator. Responses of larval zebrafish were quantified during a 30-min dosing period. Predictive metrics were calculated by comparing fish activity to mammalian seizure liability for each drug. Drug level analysis was performed to calculate concentrations in dose solutions and larvae. Fifteen drug candidates with neuronal targets, some having preclinical convulsion findings in mammals, were tested similarly. The assay has good predictive value for established mammalian responses to reference drugs. Analysis of drug absorption by larval fish revealed a positive correlation between hyperactive behavior and pro-convulsive drug absorption. False negative results were associated with significantly lower compound absorption compared with true negative or true positive results. The predictive value for preclinical toxicology findings was inferior to that suggested by reference drugs. Disproportionately low exposures in larvae giving false negative results demonstrate that drug exposure analysis can help interpret results. Due to the rigorous testing commonly performed in preclinical toxicology, predicting convulsions in those studies may be more difficult than predicting effects from marketed drugs. Copyright © 2017 Elsevier Inc. All rights reserved.
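The "predictive metrics" such an assay reports are the standard confusion-matrix quantities. A minimal sketch (the counts below are hypothetical, not the study's data):

```python
def predictive_metrics(tp, fp, tn, fn):
    """Standard screening-assay metrics from confusion-matrix counts:
    true/false positives and true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true convulsants detected
        "specificity": tn / (tn + fp),   # fraction of non-convulsants cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for a 28-drug reference set (illustrative only).
print(predictive_metrics(tp=12, fp=2, tn=11, fn=3))
```

The record's key observation, that false negatives cluster among poorly absorbed compounds, means exposure analysis can reclassify some apparent misses before these metrics are computed.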

  18. Comparison of beam position calculation methods for application in digital acquisition systems

    NASA Astrophysics Data System (ADS)

    Reiter, A.; Singh, R.

    2018-05-01

    Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and in experimental evaluations, the position derived from a straight-line fit, or equivalently from the standard deviation, is found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
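The difference-over-sum method mentioned in this record reduces, for the signal-integral algorithm, to a ratio of summed samples from opposing pickup electrodes. A minimal sketch (illustrative; the sensitivity constant `k` and the sample values are hypothetical, and real monitors require calibration):

```python
def position_diff_over_sum(left, right, k=1.0):
    """Beam position from opposing pickup signals via difference-over-sum
    of the sample sums (signal-integral algorithm). k is the monitor
    sensitivity constant (device-specific; assumed 1.0 here)."""
    s_left, s_right = sum(left), sum(right)
    return k * (s_right - s_left) / (s_right + s_left)

# A beam closer to the right electrode induces a larger right signal.
left = [0.8, 1.6, 0.8]
right = [1.2, 2.4, 1.2]
print(position_diff_over_sum(left, right))
```

The record's point is that different weightings of the samples (integral, power, straight-line fit) correspond to different least-squares fit functions, with measurably different noise behavior.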

  19. Pretest expectations strongly influence interpretation of abnormal laboratory results and further management

    PubMed Central

    2010-01-01

    Background Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may, in a more implicit manner, influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Methods Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. Results The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). 
The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Conclusions Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results. PMID:20158908
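The Bayesian calculation that the record notes physicians rarely perform explicitly is straightforward in odds form. A minimal sketch (illustrative; the likelihood ratio and pretest probabilities below are hypothetical, not from the study):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' rule in odds form: posttest odds = pretest odds * LR."""
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same abnormal result (hypothetical LR+ = 5) means very different
# things at 2% versus 30% pretest disease probability.
print(posttest_probability(0.02, 5))
print(posttest_probability(0.30, 5))
```

This is the formal version of the behavior the study observed: an abnormal result ordered for reassurance (low pretest probability) rationally warrants far less concern than the same result ordered to confirm a suspected diagnosis.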

  20. Pretest expectations strongly influence interpretation of abnormal laboratory results and further management.

    PubMed

    Houben, Paul H H; van der Weijden, Trudy; Winkens, Bjorn; Winkens, Ron A G; Grol, Richard P T M

    2010-02-16

    Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may, in a more implicit manner, influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). 
The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results.

  1. Implementation and validation of an implant-based coordinate system for RSA migration calculation.

    PubMed

    Laende, Elise K; Deluzio, Kevin J; Hennigar, Allan W; Dunbar, Michael J

    2009-10-16

    An in vitro radiostereometric analysis (RSA) phantom study of a total knee replacement was carried out to evaluate the effect of implementing two new modifications to the conventional RSA procedure: (i) adding a landmark of the tibial component as an implant marker and (ii) defining an implant-based coordinate system constructed from implant landmarks for the calculation of migration results. The motivations for these modifications were (i) to improve the representation of the implant by the markers by including the stem tip marker, which increases the marker distribution; (ii) to recover clinical RSA study cases with insufficient numbers of markers visible in the implant polyethylene; and (iii) to eliminate errors in migration calculations due to misalignment of the anatomical axes with the RSA global coordinate system. The translational and rotational phantom studies showed no loss of accuracy with the two new measurement methods. The RSA system employing these methods has a precision of better than 0.05 mm for translations and 0.03 degrees for rotations, and an accuracy of 0.05 mm for translations and 0.15 degrees for rotations. These results indicate that the new methods to improve the interpretability, relevance, and standardization of the results do not compromise precision and accuracy, and are suitable for application to clinical data.

  2. Ionization energies of aqueous nucleic acids: photoelectron spectroscopy of pyrimidine nucleosides and ab initio calculations.

    PubMed

    Slavícek, Petr; Winter, Bernd; Faubel, Manfred; Bradforth, Stephen E; Jungwirth, Pavel

    2009-05-13

    Vertical ionization energies of the nucleosides cytidine and deoxythymidine in water, the lowest in both cases being 8.3 eV, are obtained from photoelectron spectroscopy measurements in aqueous microjets. Ab initio calculations employing a nonequilibrium polarizable continuum model quantitatively reproduce the experimental spectra and provide a molecular interpretation of the individual peaks of the photoelectron spectrum, showing also that the lowest ionization originates from the base. Comparison of calculated vertical ionization potentials of pyrimidine bases, nucleosides, and nucleotides in water and in the gas phase underlines the dramatic effect of bulk hydration on the electronic structure. In the gas phase, the presence of sugar and, in particular, of phosphate has a strong effect on the energetics of ionization of the base. Upon bulk hydration, the ionization potential of the base in contrast becomes rather insensitive to the presence of the sugar and phosphate, which indicates a remarkable screening ability of the aqueous solvent. Accurate aqueous-phase vertical ionization potentials provide a significant improvement to the corrected gas-phase values used in the literature and represent important information in assessing the threshold energies for photooxidation and oxidation free energies of solvent-exposed DNA components. Likewise, such energetic data should allow improved assessment of delocalization and charge-hopping mechanisms in DNA ionized by radiation.

  3. Basic Modeling of the Solar Atmosphere and Spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.; Wagner, William J. (Technical Monitor)

    2000-01-01

    During the last three years we have continued the development of extensive computer programs for constructing realistic models of the solar atmosphere and for calculating detailed spectra to use in the interpretation of solar observations. This research involves two major interrelated efforts: work by Avrett and Loeser on the Pandora computer program for optically thick non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed high-resolution synthesis of the solar spectrum using data for over 58 million atomic and molecular lines. Our objective is to construct atmospheric models from which the calculated spectra agree as well as possible with high-and low-resolution observations over a wide wavelength range. Such modeling leads to an improved understanding of the physical processes responsible for the structure and behavior of the atmosphere.

  4. The cost and utilisation patterns of a pilot sign language interpreter service for primary health care services in South Africa

    PubMed Central

    Heap, Marion; Sinanovic, Edina

    2017-01-01

    Background The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential, and such data are scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot project that initiated the first South African Sign Language Interpreter (SASLI) service in health care. Methods The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the project's registry from 2008–2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Results Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008–2013. The cost per visit was US$189.38 in 2013, whilst the estimated costs of scaling up this service ranged from US$14.2 million to US$76.5 million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. 
Conclusions In the presence of Sign Language Interpreters, Deaf Sign language users utilise health care service to a similar extent as the hearing population. However, this service requires significant capital investment by government to enable access to healthcare for the Deaf. PMID:29272272

  5. The cost and utilisation patterns of a pilot sign language interpreter service for primary health care services in South Africa.

    PubMed

    Zulu, Tryphine; Heap, Marion; Sinanovic, Edina

    2017-01-01

    The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential, and such data are scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot project that initiated the first South African Sign Language Interpreter (SASLI) service in health care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the project's registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008-2013. The cost per visit was US$189.38 in 2013, whilst the estimated costs of scaling up this service ranged from US$14.2 million to US$76.5 million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. 
In the presence of Sign Language Interpreters, Deaf sign language users utilise health care services to a similar extent as the hearing population. However, this service requires significant capital investment by government to enable access to health care for the Deaf.
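The scale-up estimate in this record is essentially unit cost times annual utilisation times the population served. A back-of-envelope sketch: only the unit cost and the median utilisation come from the abstract; the district Deaf-signer population below is a hypothetical placeholder, not a figure from the study.

```python
# Illustrative scale-up arithmetic: only unit_cost and visits_per_person come
# from the abstract; deaf_population is a hypothetical placeholder.
unit_cost = 189.38          # US$ per SASLI-assisted visit (2013)
visits_per_person = 2.0     # median annual visits per user
deaf_population = 40_000    # hypothetical number of Deaf signers served

annual_cost = unit_cost * visits_per_person * deaf_population
```

With these placeholder inputs the total lands near US$15 million, inside the US$14.2-76.5 million range the study reports for different scenarios.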

  6. The effects of variations in parameters and algorithm choices on calculated radiomics feature values: initial investigations and comparisons to feature variability across CT image acquisition conditions

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael

    2018-02-01

Translation of radiomics into clinical practice requires confidence in its interpretations, which may be obtained by understanding and overcoming the limitations of current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined several factors that are potential sources of inconsistency in characterizing lung nodules: (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on entropy texture features of lung nodules. Nineteen lung nodules from our lung cancer screening program were identified by a CAD tool, which also provided contours. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness less often and introduced more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. The results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
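As a concrete illustration of why such parameter choices matter, here is a minimal GLCM entropy calculation in plain NumPy (a generic sketch, not the study's implementation): both the number of grey levels and the pixel offset are free parameters of exactly the kind the abstract identifies as sources of feature variability.

```python
import numpy as np

def glcm_entropy(img, levels=8, offset=(0, 1)):
    """Entropy of a grey-level co-occurrence matrix (GLCM).

    Both `levels` (grey-level quantisation) and `offset` (pixel-pair
    direction/distance) are free parameters that change the feature value.
    Offsets are assumed non-negative here for brevity.
    """
    img = np.asarray(img, dtype=float)
    span = np.ptp(img)
    # Quantise intensities into `levels` bins (itself a parameter choice)
    q = np.minimum((img - img.min()) / (span + 1e-12) * levels,
                   levels - 1).astype(int)
    dr, dc = offset
    a = q[: q.shape[0] - dr, : q.shape[1] - dc]   # reference pixels
    b = q[dr:, dc:]                               # offset neighbours
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)    # count co-occurrences
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

rng = np.random.default_rng(0)
e_noise = glcm_entropy(rng.integers(0, 256, (64, 64)))  # disordered texture
e_flat = glcm_entropy(np.full((64, 64), 7))             # uniform region
```

A uniform patch gives zero entropy, a noisy patch a high value; rerunning with different `levels` or `offset` shifts the number, which is the reproducibility problem in miniature.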

  7. Research on the Log Interpretation Method of Tuffaceous Sandstone Reservoirs of X Depression in Hailar-Tamtsag Basin

    NASA Astrophysics Data System (ADS)

    Liu, S.; Pan, B.

    2015-12-01

The logging evaluation of tuffaceous sandstone reservoirs has always been a difficult problem. Experiments show that tuff and shale have different logging responses. Because the tuff content influences the computation of shale content and of reservoir parameters, the accuracy of saturation evaluation is reduced; the effect of tuff on the calculation of saturation therefore cannot be ignored. This study takes the tuffaceous sandstone reservoirs in the X depression of the Hailar-Tamtsag basin as an example, and an electrical conduction model of tuffaceous sandstone reservoirs is established. A method combining the bacterial foraging algorithm with particle swarm optimization is used, for the first time, to calculate the content of reservoir components from well logs, and the calculated tuff and shale contents agree with thin-section analysis results. Cation exchange capacity (CEC) experiments prove that tuff is conductive, and the conversion relationship between CEC and resistivity proposed by Toshinobu Iton has been improved. From rock-electric experiments under simulated reservoir conditions, the rock-electric parameters (a, b, m and n) are determined. The improved relationship between CEC and resistivity and the rock-electric parameters are used in the calculation of saturation. Formula (1) shows the saturation equation of the tuffaceous reservoirs. Comparative analysis of irreducible water saturation against the calculated saturation shows that the saturation equation using CEC data and the rock-electric parameters performs better in oil layers than Archie's formula.
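The record does not reproduce the paper's Formula (1), but the baseline it improves upon is the classic Archie relation, in which the rock-electric parameters a, m and n appear. A generic sketch of that standard Archie form (not the paper's CEC-corrected equation):

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation: Sw**n = a*Rw / (phi**m * Rt).

    rw: formation-water resistivity (ohm-m), rt: true formation resistivity
    (ohm-m), phi: porosity (fraction); a, m, n are rock-electric parameters
    determined from core experiments.
    """
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)

# Illustrative values only
sw = archie_sw(rw=0.05, rt=10.0, phi=0.20)   # water saturation, fraction
```

The paper's contribution is to add a CEC-derived conductivity term to this kind of equation so that conductive tuff no longer biases the computed saturation.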

  8. Improvement of the photometric sunspot index and changes of the disk-integrated sunspot contrast with time

    NASA Astrophysics Data System (ADS)

    Froehlich, Claus; Pap, Judit M.; Hudson, Hugh S.

    1994-06-01

The photometric sunspot index (PSI) was developed to study the effects of sunspots on solar irradiance. It is calculated from the sunspot data published in the Solar-Geophysical Data catalog. It has been shown that the former PSI models overestimate the effect of dark sunspots on solar irradiance; furthermore, results of direct sunspot photometry indicate that the contrast of spots depends on their area. An improved PSI calculation is presented; it takes into account the area dependence of the contrast and calculates 'true' daily means for each observation using the differential rotation of the spots. Moreover, the observations are screened for outliers, which improves the homogeneity of the data set substantially, at least for the period after December 1981, when NOAA started to report data from a few stations instead of one or two. A detailed description of the method is provided. The correlation between the newly calculated PSI and total solar irradiance is studied for different phases of solar cycles 21 and 22 using bivariate spectral analysis. The results can be used as a 'calibration' of PSI in terms of gain, the factor by which PSI has to be multiplied to yield the observed irradiance change. The factor changes with time, from about 0.6 in 1980 to 1.1 in 1990. This unexpected result cannot be interpreted as a change of the contrast relative to the quiet Sun (as it is normally defined and determined by direct photometry) but rather as a change of the contrast between the spots and their surroundings as seen in total irradiance (integrated over the solar disk). This may partly be explained by a change in the ratio between the areas of the spots and the surrounding faculae.

  9. Improvement of the photometric sunspot index and changes of the disk-integrated sunspot contrast with time

    NASA Technical Reports Server (NTRS)

    Froehlich, Claus; Pap, Judit M.; Hudson, Hugh S.

    1994-01-01

The photometric sunspot index (PSI) was developed to study the effects of sunspots on solar irradiance. It is calculated from the sunspot data published in the Solar-Geophysical Data catalog. It has been shown that the former PSI models overestimate the effect of dark sunspots on solar irradiance; furthermore, results of direct sunspot photometry indicate that the contrast of spots depends on their area. An improved PSI calculation is presented; it takes into account the area dependence of the contrast and calculates 'true' daily means for each observation using the differential rotation of the spots. Moreover, the observations are screened for outliers, which improves the homogeneity of the data set substantially, at least for the period after December 1981, when NOAA started to report data from a few stations instead of one or two. A detailed description of the method is provided. The correlation between the newly calculated PSI and total solar irradiance is studied for different phases of solar cycles 21 and 22 using bivariate spectral analysis. The results can be used as a 'calibration' of PSI in terms of gain, the factor by which PSI has to be multiplied to yield the observed irradiance change. The factor changes with time, from about 0.6 in 1980 to 1.1 in 1990. This unexpected result cannot be interpreted as a change of the contrast relative to the quiet Sun (as it is normally defined and determined by direct photometry) but rather as a change of the contrast between the spots and their surroundings as seen in total irradiance (integrated over the solar disk). This may partly be explained by a change in the ratio between the areas of the spots and the surrounding faculae.

  10. Interpretation of the silver L X-ray spectrum

    NASA Technical Reports Server (NTRS)

    Chen, M. H.; Crasemann, B.; Aoyagi, M.; Mark, H.

    1977-01-01

    Silver L X-ray energies were calculated using theoretical binding energies from relaxed orbital relativistic Hartree-Fock-Slater calculations. Theoretical X-ray energies are compared with experimental results.

  11. On the Interpretation of the Influence of Substituents on the UV-Spectroscopic Properties of Benzimidazole and Indazole Derivatives (In German)

    NASA Astrophysics Data System (ADS)

    Fabian, Walter

    1983-12-01

    On the interpretation of the influence of substituents on the UV-spectroscopic properties of benzimidazole and indazole derivatives. The UV spectra of a series of substituted benzimidazoles and indazoles were calculated by means of semiempirical quantum chemical methods (PPP, CNDO/S-CI). The results of the PPP calculations were subjected to a configuration analysis. Using this method, the influence of the nature and position of substituents on the absorption characteristics could be rationalized.

  12. Energy anisotropy as a function of the direction of spin magnetization for a doublet system

    NASA Astrophysics Data System (ADS)

    Cherry, Peter J.; Malkin, Vladimir G.; Malkina, Olga L.; Asher, James R.

    2016-11-01

This manuscript describes new phenomena that are currently not taken into account either in the interpretation of experimental EPR spectra or in quantum chemical calculations of EPR parameters. The article presents an argument, with evidence, against the common belief that in the absence of an external magnetic field the total energy of a doublet system is independent of the spin orientation. Consequences of this phenomenon for the interpretation of experimental EPR studies as well as for quantum chemical calculations of EPR parameters are discussed.

  13. 5 CFR Appendix A to Subpart J of... - Guidelines for Interpreting State Court Orders Dividing Civil Service Retirement Benefits

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... their inclusion. 3. Orders that contain a general instruction to calculate the former spouse's share effective at the time of divorce or separation will not be interpreted to prevent the inclusion of salary... waived for inclusion in CSRS annuities, will not be interpreted as dividing CSRS benefits. (Such orders...

  14. Fibre-matrix bond strength studies of glass, ceramic, and metal matrix composites

    NASA Technical Reports Server (NTRS)

    Grande, D. H.; Mandell, J. F.; Hong, K. C. C.

    1988-01-01

    An indentation test technique for compressively loading the ends of individual fibers to produce debonding has been applied to metal, glass, and glass-ceramic matrix composites; bond strength values at debond initiation are calculated using a finite-element model. Results are correlated with composite longitudinal and interlaminar shear behavior for carbon and Nicalon fiber-reinforced glasses and glass-ceramics including the effects of matrix modifications, processing conditions, and high-temperature oxidation embrittlement. The data indicate that significant bonding to improve off-axis and shear properties can be tolerated before the longitudinal behavior becomes brittle. Residual stress and other mechanical bonding effects are important, but improved analyses and multiaxial interfacial failure criteria are needed to adequately interpret bond strength data in terms of composite performance.

  15. Interactive Visualization to Advance Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn

    2008-04-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations.

  16. An investigation of hydraulic conductivity estimation in a ground-water flow study of Northern Long Valley, New Jersey

    USGS Publications Warehouse

    Hill, Mary C.

    1985-01-01

The purpose of this study was to develop a methodology for investigating the aquifer characteristics and water supply potential of an aquifer system. In particular, the geohydrology of northern Long Valley, New Jersey, was investigated. Geohydrologic data were collected and analyzed to characterize the site. Analysis was accomplished by interpreting the available data and by using a numerical simulation of the water-table aquifer. Special attention was given to the estimation of hydraulic conductivity values and hydraulic conductivity structure, which together define the hydraulic conductivity of the modeled aquifer. Hydraulic conductivity and all other aspects of the system were first estimated using the trial-and-error method of calibration. The estimation of hydraulic conductivity was then improved by using a least-squares method to estimate hydraulic conductivity values and by improvements in the parameter structure. These efforts improved the calibration of the model far more than a preceding period of similar effort using trial-and-error calibration. In addition, the proposed method provides statistical information on the reliability of estimated hydraulic conductivity values, calculated heads, and calculated flows. The methodology developed and applied in this work proved to be of substantial value in the evaluation of the aquifer considered.
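The least-squares idea here (fit hydraulic-conductivity parameters to observed heads, and obtain reliability statistics as a by-product) can be sketched on a linearised toy problem. The sensitivity matrix and all numbers below are synthetic stand-ins, not the Long Valley model, which derives sensitivities from flow-model runs.

```python
import numpy as np

# Linearised toy calibration: observed heads respond linearly to two
# hydraulic-conductivity parameters through a sensitivity matrix X.
# (Synthetic numbers; a real study derives X from flow-model runs.)
rng = np.random.default_rng(0)
k_true = np.array([3.0, 1.5])                     # "true" zone conductivities
X = np.column_stack([rng.uniform(0.5, 2.0, 30),   # head sensitivities dh/dk
                     rng.uniform(-1.0, 1.0, 30)])
h_obs = X @ k_true + rng.normal(0.0, 0.05, 30)    # heads + observation noise

# Least-squares estimate and its reliability statistics
k_hat, rss, _, _ = np.linalg.lstsq(X, h_obs, rcond=None)
s2 = rss[0] / (30 - 2)                            # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                 # parameter covariance
std_err = np.sqrt(np.diag(cov))                   # standard errors of k_hat
```

The standard errors are the kind of reliability information the abstract notes as an advantage over trial-and-error calibration, which yields no such statistics.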

  17. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    NASA Astrophysics Data System (ADS)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
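The trio-of-points interpolation described above can be written down compactly. This sketch follows the Blakely and Simpson (1986) construction: fit a second-order polynomial through three equally spaced gradient samples and keep the peak only when the centre sample exceeds both neighbours, which is exactly the condition the authors relax.

```python
def parabola_peak(g1, g2, g3, d=1.0):
    """Peak of a second-order polynomial through three equally spaced
    horizontal-gradient samples (spacing d), in the style of Blakely and
    Simpson (1986). Returns (offset from the centre sample, interpolated
    maximum), or None when the centre sample is not a local maximum."""
    if not (g2 > g1 and g2 > g3):
        return None
    a = (g1 + g3 - 2.0 * g2) / (2.0 * d * d)   # curvature (negative at a peak)
    b = (g3 - g1) / (2.0 * d)                  # slope at the centre sample
    offset = -b / (2.0 * a)                    # peak offset, |offset| < d
    return offset, g2 - b * b / (4.0 * a)      # offset and peak magnitude

offset, peak = parabola_peak(2.0, 5.0, 4.0)
```

For the sample trio (2, 5, 4) the peak lies a quarter of a grid step past the centre sample; the strict `g2 > g1 and g2 > g3` test is what misses maxima over complex multi-source models, motivating the paper's additional condition.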

  18. Using uterine activity to improve fetal heart rate variability analysis for detection of asphyxia during labor.

    PubMed

    Warmerdam, G J J; Vullings, R; Van Laar, J O E H; Van der Hout-Van der Jagt, M B; Bergmans, J W M; Schmitt, L; Oei, S G

    2016-03-01

During labor, uterine contractions can cause temporary oxygen deficiency for the fetus. In cases of severe and prolonged oxygen deficiency, this can lead to asphyxia. The currently used technique for detection of asphyxia, cardiotocography (CTG), suffers from low specificity. Recent studies suggest that analysis of fetal heart rate variability (HRV) in addition to CTG can provide information on fetal distress. However, interpretation of fetal HRV during labor is difficult due to the influence of uterine contractions on fetal HRV. The aim of this study is therefore to investigate whether HRV features differ between contraction and rest periods, and whether these differences can improve the detection of asphyxia. To this end, a case-control study was performed, using 14 cases with asphyxia that were matched with 14 healthy fetuses. We did not find significant differences for individual HRV features when calculated over the fetal heart rate without separating contractions and rest periods (p > 0.30 for all HRV features). Separating contractions from rest periods did result in a significant difference. In particular, the ratio between HRV features calculated during and outside contractions can improve discrimination between fetuses with and without asphyxia (p < 0.04 for three out of the four ratio HRV features studied in this paper).
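The ratio-type feature can be illustrated with one standard time-domain HRV measure, SDNN (the standard deviation of beat-to-beat RR intervals). The RR values below are made-up placeholders, and SDNN is used only as a representative feature; the point is the during/outside-contraction ratio, not the specific features the paper studied.

```python
import statistics

def sdnn(rr_ms):
    """SDNN, a standard time-domain HRV feature: the standard deviation of
    beat-to-beat (RR) intervals in milliseconds."""
    return statistics.pstdev(rr_ms)

# Made-up RR intervals for one contraction window and one rest window
rr_contraction = [410, 430, 395, 445, 420, 405, 450, 400]
rr_rest = [420, 425, 418, 422, 421, 419, 424, 423]

# The ratio-type feature: HRV during contractions relative to rest
ratio = sdnn(rr_contraction) / sdnn(rr_rest)
```

A ratio far from 1 says the contraction response differs from baseline, which is information a whole-trace HRV feature averages away.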

  19. Remote sensing of snow and ice

    NASA Technical Reports Server (NTRS)

    Rango, A.

    1979-01-01

This paper reviews remote sensing of snow and ice, techniques for improved monitoring, and incorporation of the new data into forecasting and management systems. The snowcover interpretation of visible and infrared data from satellites, automated digital methods, radiative transfer modeling to calculate the solar reflectance of snow, and models using snowcover input data and elevation zones for calculating snowmelt are discussed. The use of visible and near-infrared techniques for inferring snow properties, microwave monitoring of snowpack characteristics, use of Landsat images for collecting glacier data, monitoring of river ice with visible imagery from NOAA satellites, use of sequential imagery for tracking ice flow movement, and microwave studies of sea ice are described. Applications of snow and ice research to commercial use are examined, and it is concluded that a major problem to be solved is the characterization of snow and ice in nature, since assigning the correct properties to a real system to be modeled has been difficult.

  20. Challenging Density Functional Theory Calculations with Hemes and Porphyrins.

    PubMed

    de Visser, Sam P; Stillman, Martin J

    2016-04-07

In this paper we review recent advances in computational chemistry, focusing on the chemical description of heme proteins and synthetic porphyrins that both mimic natural processes and serve technological uses. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol⁻¹). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends, and guide the interpretation of electronic structures of complex systems. The case studies focus on the calculation of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties.

  1. Enhancing causal interpretations of quality improvement interventions.

    PubMed

    Cable, G

    2001-09-01

In an era of chronic resource scarcity it is critical that quality improvement professionals have confidence that their project activities cause measured change. A commonly used research design, the single-group pre-test/post-test design, provides little insight into whether quality improvement interventions cause measured outcomes. A re-evaluation of a quality improvement programme designed to reduce the percentage of bilateral cardiac catheterisations for the period from January 1991 to October 1996 in three catheterisation laboratories in a northeastern state in the USA was performed using an interrupted time series design with switching replications. The accuracy and causal interpretability of the findings were considerably improved compared with the original evaluation design. Moreover, the re-evaluation provided tangible evidence in support of the suggestion that more rigorous designs can and should be more widely employed to improve the causal interpretability of quality improvement efforts. Evaluation designs for quality improvement projects should be constructed to provide a reasonable opportunity, given available time and resources, for causal interpretation of the results. Evaluators of quality improvement initiatives may only infrequently have access to randomised designs. Nonetheless, as shown here, other very rigorous research designs are available for improving causal interpretability. Unilateral methodological surrender need not be the only alternative to randomised experiments.
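The interrupted time series design can be sketched as a segmented regression on synthetic monthly data. Everything below is illustrative (not the catheterisation data), and the switching-replications refinement is omitted; the sketch only shows how a level change at the intervention point is estimated.

```python
import numpy as np

# Synthetic monthly series: baseline level 50, slight upward trend, and an
# 8-unit level drop when the intervention begins at month 24.
rng = np.random.default_rng(42)
t = np.arange(48, dtype=float)
after = (t >= 24).astype(float)                     # intervention indicator
y = 50.0 + 0.1 * t - 8.0 * after + rng.normal(0.0, 1.0, t.size)

# Segmented-regression design: intercept, baseline trend, level change,
# and post-intervention slope change
X = np.column_stack([np.ones_like(t), t, after, after * (t - 24.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]                              # estimated intervention effect
```

Unlike a single-group pre/post comparison, the pre-intervention trend term keeps a secular drift from being mistaken for an intervention effect, which is the causal-interpretability point the abstract makes.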

  2. Simulation of target interpretation based on infrared image features and psychology principle

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping

    2009-07-01

Target feature extraction and identification is an important and complicated step in target interpretation: it directly affects the interpreter's psychosensory response to the target's infrared image and ultimately determines target viability. Using statistical decision theory and psychological principles, and based on four psychophysical experiments, an interpretation model for infrared targets is established. The model obtains the target detection probability by calculating the similarity of four features between the target region and the background region delineated on the infrared image. Verified against a large number of practical target interpretations, the model effectively simulates the target interpretation and detection process and yields objective interpretation results, providing technical support for target extraction, identification, and decision-making.

  3. Medical Interpreting: Improving Communication with Your Patients.

    ERIC Educational Resources Information Center

    Tebble, Helen

    The guide is designed for physicians and other medical practitioners who need to work with medical interpreters to improve communication with patients. Special attention is given to the Australian context. An introductory section discusses the need for medical interpreters and explains the guide's organization. Subsequent sections address these…

  4. Users Manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)

    NASA Technical Reports Server (NTRS)

    Ruff, Gary A.; Berkowitz, Brian M.

    1990-01-01

    LEWICE is an ice accretion prediction code that applies a time-stepping procedure to calculate the shape of an ice accretion. The potential flow field is calculated in LEWICE using the Douglas Hess-Smith 2-D panel code (S24Y). This potential flow field is then used to calculate the trajectories of particles and the impingement points on the body. These calculations are performed to determine the distribution of liquid water impinging on the body, which then serves as input to the icing thermodynamic code. The icing thermodynamic model is based on the work of Messinger, but contains several major modifications and improvements. This model is used to calculate the ice growth rate at each point on the surface of the geometry. By specifying an icing time increment, the ice growth rate can be interpreted as an ice thickness which is added to the body, resulting in the generation of new coordinates. This procedure is repeated, beginning with the potential flow calculations, until the desired icing time is reached. The operation of LEWICE is illustrated through the use of five examples. These examples are representative of the types of applications expected for LEWICE. All input and output is discussed, along with many of the diagnostic messages contained in the code. Several error conditions that may occur in the code for certain icing conditions are identified, and a course of action is recommended. LEWICE has been used to calculate a variety of ice shapes, but should still be considered a research code. The code should be exercised further to identify any shortcomings and inadequacies. Any modifications identified as a result of these cases, or of additional experimental results, should be incorporated into the model. Using it as a test bed for improvements to the ice accretion model is one important application of LEWICE.
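The time-stepping procedure described above can be caricatured in a few lines. Everything here is a toy stand-in (the growth-rate function, grid, and time step are invented for illustration); LEWICE itself recomputes the potential flow, droplet trajectories, and thermodynamic balance on the updated geometry at every step.

```python
import math

def toy_growth_rate(s):
    """Hypothetical ice growth rate (m/s) along surface coordinate s,
    peaking at the stagnation point s = 0 (a stand-in, not LEWICE's
    thermodynamic model)."""
    return 1e-5 * math.exp(-((s / 0.02) ** 2))

n_points = 41
surface = [0.05 * i / (n_points - 1) - 0.025 for i in range(n_points)]
thickness = [0.0] * n_points

dt, total_time = 10.0, 120.0        # icing time increment and total time (s)
t = 0.0
while t < total_time:
    # LEWICE would recompute the potential flow and droplet impingement for
    # the current iced geometry here, before each growth update.
    thickness = [h + toy_growth_rate(s) * dt for h, s in zip(thickness, surface)]
    t += dt

peak_ice = max(thickness)           # thickest ice, at the stagnation point
```

The essential design choice the abstract describes survives even in the toy: growth rate times the icing time increment becomes added thickness, and the geometry update feeds back into the next step's flow solution.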

  5. ECG interpretation in Emergency Department residents: an update and e-learning as a resource to improve skills.

    PubMed

    Barthelemy, Francois X; Segard, Julien; Fradin, Philippe; Hourdin, Nicolas; Batard, Eric; Pottier, Pierre; Potel, Gilles; Montassier, Emmanuel

    2017-04-01

ECG interpretation is a pivotal skill to acquire during residency, especially for Emergency Department (ED) residents. Previous studies reported that ECG interpretation competency among residents was rather low. However, the optimal resource to improve ECG interpretation skills remains unclear. The aim of our study was to compare two teaching modalities for improving the ECG interpretation skills of ED residents: e-learning and lecture-based courses. The participants were first-year and second-year ED residents, assigned randomly to the two groups. The ED residents were evaluated by means of a precourse test at the beginning of the study and a postcourse test after the e-learning and lecture-based courses. These evaluations consisted of the interpretation of 10 different ECGs. We included 39 ED residents from four different hospitals. The precourse test showed that the overall average ECG interpretation score was 40%. Nineteen participants were then assigned to the e-learning course and 20 to the lecture-based course. Globally, there was a significant improvement in ECG interpretation skills (accuracy score = 55%, P = 0.0002). However, the improvement did not differ significantly between the two groups (P = 0.14). Our findings showed that ECG interpretation was not optimal and that our e-learning program may be an effective tool for enhancing ECG interpretation skills among ED residents. A large European study should be carried out to evaluate ECG interpretation skills among ED residents before the implementation of ECG learning, including e-learning strategies, during ED residency.

  6. An evolutionary view of chromatography data systems used in bioanalysis.

    PubMed

    McDowall, R D

    2010-02-01

    This is a personal view of how chromatographic peak measurement and analyte quantification for bioanalysis have evolved from the manual methods of 1970 to the electronic working possible in 2010. In four decades there have been major changes from a simple chart recorder output (that was interpreted and quantified manually) through simple automation of peak measurement, calculation of standard curves and quality control values and instrument control to the networked chromatography data systems of today that are capable of interfacing with Laboratory Information Management Systems and other IT applications. The incorporation of electronic signatures to meet regulatory requirements offers a great opportunity for business improvement and electronic working.

  7. Models for Models: An Introduction to Polymer Models Employing Simple Analogies

    NASA Astrophysics Data System (ADS)

    Tarazona, M. Pilar; Saiz, Enrique

    1998-11-01

    An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
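The freely jointed chain mentioned as the simplest model has a textbook result, ⟨R²⟩ = N·b², that a short Monte Carlo check can reproduce. This is a generic stdlib-only sketch of that standard model, not code from the article.

```python
import math
import random

def freely_jointed_chain(n_bonds, bond_length=1.0):
    """End-to-end vector of one freely jointed chain: n_bonds steps of fixed
    length in independent, uniformly random 3-D directions."""
    x = y = z = 0.0
    for _ in range(n_bonds):
        cos_t = random.uniform(-1.0, 1.0)      # uniform direction on the sphere
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = random.uniform(0.0, 2.0 * math.pi)
        x += bond_length * sin_t * math.cos(phi)
        y += bond_length * sin_t * math.sin(phi)
        z += bond_length * cos_t
    return x, y, z

random.seed(0)
n_bonds, trials = 100, 2000
mean_r2 = sum(sum(c * c for c in freely_jointed_chain(n_bonds))
              for _ in range(trials)) / trials
# Textbook prediction for the freely jointed chain: <R^2> = N * b^2 = 100
```

The simulated mean squared end-to-end distance hovers around 100, matching N·b² for 100 unit bonds; refinements such as fixed bond angles or excluded volume change this scaling, which is the article's ladder of successive improvements.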

  8. Model Atmospheres and Spectra for Extrasolar Giant Planets

    NASA Technical Reports Server (NTRS)

    Freedman, Richard S.; Beebe, Reta (Technical Monitor)

    2000-01-01

In the past few years much new observational data has become available for brown dwarfs and extrasolar planets. Not only are new objects being discovered, but the availability of higher-resolution spectra is improving. This allows a better comparison between the models and the available data, and places new constraints on the models, which now have to be made more physically realistic in order to better interpret the observations. Under this grant, an array of new opacities was calculated and successfully applied to a variety of physical situations that were used as input to model available observations of brown dwarfs and extrasolar giant planets.

  9. Assessing Spontaneous Combustion Instability with Recurrence Quantification Analysis

    NASA Technical Reports Server (NTRS)

    Eberhart, Chad J.; Casiano, Matthew J.

    2016-01-01

Spontaneous instabilities can pose a significant challenge to verification of combustion stability, and characterizing their onset is an important avenue of improvement for stability assessments of liquid propellant rocket engines. Recurrence Quantification Analysis (RQA) is used here to explore nonlinear combustion dynamics that might give insight into instability. Multiple types of patterns representative of different dynamical states are identified within fluctuating chamber pressure data, and markers for impending instability are found. A class of metrics which describe these patterns is also calculated. RQA metrics are compared with and interpreted against another metric from nonlinear time series analysis, the Hurst exponent, to help better distinguish between stable and unstable operation.
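A recurrence plot and the simplest RQA metric, the recurrence rate, take only a few lines. This is a generic scalar-series sketch with phase-space embedding omitted, not the paper's pipeline, but it already separates an orderly signal from noise the way RQA separates dynamical states in chamber-pressure data.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when samples i and j lie within
    eps of each other (scalar series; phase-space embedding omitted)."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def recurrence_rate(r):
    """Fraction of recurrent points, the simplest RQA metric."""
    return r.sum() / r.size

t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 20)                  # orderly, recurrent signal
noise = np.random.default_rng(1).standard_normal(200)  # disordered signal
rr_periodic = recurrence_rate(recurrence_matrix(periodic, eps=0.1))
rr_noise = recurrence_rate(recurrence_matrix(noise, eps=0.1))
```

The periodic signal revisits its states every cycle and scores a much higher recurrence rate than the noise; richer RQA metrics (determinism, laminarity) quantify the diagonal and vertical line structures in the same matrix.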

  10. Soft x-ray continuum radiation transmitted through metallic filters: an analytical approach to fast electron temperature measurements.

    PubMed

    Delgado-Aparicio, L; Tritz, K; Kramer, T; Stutman, D; Finkenthal, M; Hill, K; Bitter, M

    2010-10-01

    A new set of analytic formulas describes the transmission of soft x-ray continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures in contrast with the solutions obtained when using a transmission approximated by a single-Heaviside function [S. von Goeler et al., Rev. Sci. Instrum. 70, 599 (1999)]. The new analytic formulas can improve the interpretation of the experimental results and thus contribute in obtaining fast temperature measurements in between intermittent Thomson scattering data.

  11. Quantum Mechanical Calculations of Vibrational Sum-Frequency-Generation (SFG) Spectra of Cellulose: Dependence of the CH and OH Peak Intensity on the Polarity of Cellulose Chains within the SFG Coherence Domain.

    PubMed

    Lee, Christopher M; Chen, Xing; Weiss, Philip A; Jensen, Lasse; Kim, Seong H

    2017-01-05

    Vibrational sum-frequency-generation (SFG) spectroscopy is capable of selectively detecting crystalline biopolymers interspersed in amorphous polymer matrices. However, spectral interpretation is difficult due to the lack of knowledge of how spatial arrangements of crystalline segments influence SFG spectral features. Here we report time-dependent density functional theory (TD-DFT) calculations of cellulose crystallites in intimate contact, packed with two different polarities: parallel versus antiparallel. TD-DFT calculations reveal that the CH/OH intensity ratio is very sensitive to the polarity of the crystallite packing. Theoretical calculations of hyperpolarizability tensors (βabc) clearly show the dependence of SFG intensities on the polarity of crystallite packing within the SFG coherence length, which provides the basis for interpretation of the empirically observed SFG features of native cellulose in biological systems.

  12. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    NASA Astrophysics Data System (ADS)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With improvements in technology, fault interpretation can be expedited with the aid of algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but interpreting faults from them still requires substantial human interaction, and they are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  13. A national burden of disease calculation: Dutch disability-adjusted life-years. Dutch Burden of Disease Group.

    PubMed Central

    Melse, J M; Essink-Bot, M L; Kramers, P G; Hoeymans, N

    2000-01-01

    OBJECTIVES: This study estimated the burden of disease due to 48 major causes in the Netherlands in 1994 in disability-adjusted life-years (DALYs), using national epidemiologic data and disability weights, and explored associated problems and uncertainties. METHODS: We combined data from Dutch vital statistics, registrations, and surveys with Dutch disability weights to calculate disease-specific health loss in DALYs, which are the sum of years of life lost (YLLs) and years lived with disability (YLDs) weighted for severity. RESULTS: YLLs were dominated by cardiovascular diseases and cancers, while YLDs were dominated by mental disorders and a range of chronic somatic disorders (such as chronic nonspecific lung disease and diabetes). These 4 diagnostic groups caused approximately equal numbers of DALYs. Sensitivity analysis calls for improving the accuracy of the epidemiologic data in connection with disability weights, especially for mild and frequent diseases. CONCLUSIONS: The DALY approach appeared to be feasible at a national Western European level and produced interpretable results, comparable to results from the Global Burden of Disease Study for the Established Market Economies. Suggestions for improving the methodology and its applicability are presented. PMID:10937004
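    The DALY arithmetic described above (DALY = YLL + severity-weighted YLD) can be sketched with purely illustrative numbers; the study itself used Dutch registry data and Dutch disability weights:

```python
def years_of_life_lost(deaths, life_expectancy_at_death):
    """YLL: deaths weighted by remaining life expectancy at age of death."""
    return deaths * life_expectancy_at_death

def years_lived_with_disability(cases, duration_years, disability_weight):
    """YLD: cases weighted by duration and severity (disability weight 0..1)."""
    return cases * duration_years * disability_weight

def daly(deaths, life_expectancy_at_death, cases, duration_years, disability_weight):
    # DALY = YLL + YLD
    return (years_of_life_lost(deaths, life_expectancy_at_death)
            + years_lived_with_disability(cases, duration_years, disability_weight))

# Hypothetical disease: 1000 deaths losing 10 years of life expectancy each,
# plus 50,000 cases lasting 1 year at disability weight 0.2
burden = daly(1000, 10.0, 50_000, 1.0, 0.2)  # 10,000 YLL + 10,000 YLD
```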

  14. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Forward modeling of seismic waves can become unstable when large time steps are used over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation at large time steps.
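    The paper's scheme combines symplectic time stepping with a Fourier finite-difference space operator. As a much-reduced illustration of the structure-preserving time part only, a 1-D staggered-grid leapfrog (itself a symplectic integrator) for the first-order acoustic system can be written as follows; grid sizes and velocity are placeholders, not the paper's setup:

```python
import numpy as np

def leapfrog_acoustic_1d(nx=200, nt=300, dx=5.0, dt=5e-4, c=2000.0):
    """1-D acoustic wave in first-order pressure/velocity form, advanced
    with a staggered (symplectic) leapfrog. A toy stand-in: the paper's
    space operator is Fourier finite-difference, not this 2-point stencil."""
    p = np.zeros(nx)          # pressure on the main grid
    v = np.zeros(nx - 1)      # particle velocity on the staggered grid
    p[nx // 2] = 1.0          # point source at the center
    for _ in range(nt):
        v += dt / dx * (p[1:] - p[:-1])               # kick: velocity half of the pair
        p[1:-1] += c**2 * dt / dx * (v[1:] - v[:-1])  # drift: pressure update
    return p
```

    With c*dt/dx = 0.2 the CFL condition is satisfied, so amplitudes stay bounded over long runs, which is the stability property the symplectic structure is meant to preserve.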

  15. Assessing readability formula differences with written health information materials: application, results, and recommendations.

    PubMed

    Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K

    2013-01-01

    Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and how each formula is applied. These variations complicate interpretations of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in the health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations. Recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, estimates from an individual readability formula varied by up to 6 reading grade levels. 
The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied with information about word sample size, location of word sampling in the text, formatting, and method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
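    The SMOG formula recommended above is simple enough to sketch. The version below uses McLaughlin's published regression, grade = 3.1291 + 1.043 · sqrt(polysyllables × 30 / sentences), with a deliberately crude vowel-group syllable counter (real tools use dictionaries or better heuristics, which is one source of the between-tool variability the study measured):

```python
import math
import re

def count_syllables(word):
    """Very rough heuristic: one syllable per group of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    """SMOG reading grade estimate (McLaughlin, 1969)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 3.1291 + 1.043 * math.sqrt(polysyllables * 30 / len(sentences))
```

    Because sample size and sentence selection change the polysyllable count, two tools applying this same regression can still disagree by whole grade levels.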

  16. Computer-Interpreted Electrocardiograms: Benefits and Limitations.

    PubMed

    Schläpfer, Jürg; Wellens, Hein J

    2017-08-29

    Computerized interpretation of the electrocardiogram (CIE) was introduced to improve the correct interpretation of the electrocardiogram (ECG), facilitating health care decision making and reducing costs. Worldwide, millions of ECGs are recorded annually, with the majority automatically analyzed, followed by an immediate interpretation. Limitations in the diagnostic accuracy of CIE were soon recognized and still persist, despite ongoing improvement in ECG algorithms. Unfortunately, inexperienced physicians ordering the ECG may fail to recognize interpretation mistakes and accept the automated diagnosis without criticism. Clinical mismanagement may result, with the risk of exposing patients to useless investigations or potentially dangerous treatment. Consequently, CIE over-reading and confirmation by an experienced ECG reader are essential and are repeatedly recommended in published reports. Implementation of new ECG knowledge is also important. The current status of automated ECG interpretation is reviewed, with suggestions for improvement. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  17. Using time-delayed mutual information to discover and interpret temporal correlation structure in complex populations

    NASA Astrophysics Data System (ADS)

    Albers, D. J.; Hripcsak, George

    2012-03-01

    This paper addresses how to calculate and interpret the time-delayed mutual information (TDMI) for a complex, diversely and sparsely measured, possibly non-stationary population of time series of unknown composition and origin. The primary vehicle used for this analysis is a comparison between the time-delayed mutual information averaged over the population and the time-delayed mutual information of an aggregated population (here, aggregation implies the population is conjoined before any statistical estimates are implemented). Through the use of information theoretic tools, a sequence of practically implementable calculations is detailed that allows the average and aggregate time-delayed mutual information to be interpreted. Moreover, these calculations can also be used to understand the degree of homogeneity or heterogeneity present in the population. To demonstrate that the proposed methods can be used in nearly any situation, the methods are applied and demonstrated on the time series of glucose measurements from two different subpopulations of individuals from the Columbia University Medical Center electronic health record repository, revealing a picture of the composition of the population as well as physiological features.
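    A minimal plug-in (histogram) estimator of the time-delayed mutual information conveys the quantity being averaged and aggregated; the paper's estimators handle sparse, irregular sampling far more carefully than this sketch does:

```python
import numpy as np

def tdmi(x, lag, bins=16):
    """Histogram (plug-in) estimate of I(x_t ; x_{t+lag}) in nats."""
    x = np.asarray(x, dtype=float)
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x_t
    py = pxy.sum(axis=0, keepdims=True)   # marginal of x_{t+lag}
    nz = pxy > 0                          # skip log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# A strongly autocorrelated signal carries more information across a lag
# than white noise does
t = np.linspace(0, 60, 4000)
signal_mi = tdmi(np.sin(t), lag=1)
noise_mi = tdmi(np.random.default_rng(0).standard_normal(4000), lag=1)
```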

  18. Calculations on the orientation of the CH fragment in Co3(CO)9(μ3-CH): Implications for metal surfaces

    NASA Astrophysics Data System (ADS)

    DeKock, Roger L.; Fehlner, Thomas P.

    1982-07-01

    A series of molecular orbital calculations using the Fenske-Hall method has been carried out on Co3(CO)9(μ3-CH), in which the orientation of the CH fragment is varied with respect to the triangular plane of the three Co atoms. The calculations show that the energy differences between the orbitals that are predominantly CH in character are affected very little by the orientation of the CH fragment. These calculated differences are Δ(2σ-1σ) ≅ 7 eV and Δ(1π-1σ) ≅ 10.5 eV. The calculated splitting of the degenerate 1π orbitals for geometries with tilted CH fragments never amounted to more than 0.46 eV. Mixing of CH orbitals into the predominantly Co 3d manifold was extensive in all of the calculations. These calculations provide no support for the interpretation of energy loss and photoemission electron spectroscopy experiments in terms of CH fragments that are tilted with respect to the metal surface, but such an interpretation cannot be eliminated due to the diffuse nature of the spectral bands in the photoemission experiments.

  19. A randomized trial testing the efficacy of modifications to the nutrition facts table on comprehension and use of nutrition information by adolescents and young adults in Canada.

    PubMed

    Hobin, E; Sacco, J; Vanderlee, L; White, C M; Zuo, F; Sheeshka, J; McVey, G; Fodor O'Brien, M; Hammond, D

    2015-12-01

    Given the proposed changes to nutrition labelling in Canada and the dearth of research examining comprehension and use of nutrition facts tables (NFts) by adolescents and young adults, our objective was to experimentally test the efficacy of modifications to NFts on young Canadians' ability to interpret, compare and mathematically manipulate nutrition information in NFts on prepackaged food. An online survey was conducted among 2010 Canadians aged 16 to 24 years drawn from a consumer sample. Participants were randomized to view two NFts according to one of six experimental conditions, using a between-groups 2 x 3 factorial design: serving size (current NFt vs. standardized serving-sizes across similar products) x percent daily value (% DV) (current NFt vs. "low/med/high" descriptors vs. colour coding). The survey included seven performance tasks requiring participants to interpret, compare and mathematically manipulate nutrition information on NFts. Separate modified Poisson regression models were conducted for each of the three outcomes. The ability to compare two similar products was significantly enhanced in NFt conditions that included standardized serving-sizes (p ≤ .001 for all). Adding descriptors or colour coding of % DV next to calories and nutrients on NFts significantly improved participants' ability to correctly interpret % DV information (p ≤ .001 for all). Providing both standardized serving-sizes and descriptors of % DV had a modest effect on participants' ability to mathematically manipulate nutrition information to calculate the nutrient content of multiple servings of a product (relative ratio = 1.19; 95% confidence limit: 1.04-1.37). Standardizing serving-sizes and adding interpretive % DV information on NFts improved young Canadians' comprehension and use of nutrition information. Some caution should be exercised in generalizing these findings to all Canadian youth due to the sampling issues associated with the study population. 
Further research is needed to replicate this study in a more heterogeneous sample in Canada and across a range of food products and categories.

  20. Labtracker+, a medical smartphone app for the interpretation of consecutive laboratory results: an external validation study.

    PubMed

    Hilderink, Judith M; Rennenberg, Roger J M W; Vanmolkot, Floris H M; Bekers, Otto; Koopmans, Richard P; Meex, Steven J R

    2017-09-01

    When monitoring patients over time, clinicians may struggle to distinguish 'real changes' in consecutive blood parameters from so-called natural fluctuations. In practice, they have to do so by relying on their clinical experience and intuition. We developed Labtracker+, a medical app that calculates the probability that an increase or decrease over time in a specific blood parameter is real, given the time between measurements. We presented patient cases to 135 participants to examine whether there is a difference between medical students, residents and experienced clinicians when it comes to interpreting changes between consecutive laboratory results. Participants were asked to interpret whether changes in consecutive laboratory values were likely to be 'real' or rather due to natural fluctuations. The answers of the study participants were compared with the probabilities calculated by the app Labtracker+ and the concordance rates were assessed. Participants were medical students (n=92), medical residents from the department of internal medicine (n=19) and internists (n=24) at a Dutch University Medical Centre. Concordance rates between the study participants and the probabilities calculated by the app Labtracker+ were compared. In addition, we tested whether physicians with clinical experience scored better concordance rates with the app Labtracker+ than inexperienced clinicians. Medical residents and internists showed significantly better concordance rates with the probabilities calculated by the app Labtracker+ than medical students, regarding their interpretation of differences between consecutive laboratory results (p=0.009 and p<0.001, respectively). The app Labtracker+ could serve as a clinical decision tool in the interpretation of consecutive laboratory test results and could contribute to rapid recognition of parameter changes by physicians. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.
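    The abstract does not disclose Labtracker+'s internal model, but the classical reference change value (RCV) from laboratory medicine captures the same idea of separating real change from analytical plus within-subject biological variation. The sketch below is that textbook calculation, not the app's algorithm, and the CV figures are illustrative:

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Percent change two consecutive results must exceed before the
    difference is unlikely (two-sided 95% for z=1.96) to be noise:
    RCV = sqrt(2) * z * sqrt(CVa^2 + CVi^2)."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical ** 2 + cv_within_subject ** 2)

# Illustrative CVs: analytical 2%, within-subject biological 5%
rcv = reference_change_value(2.0, 5.0)  # changes beyond ~15% are likely real
```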

  1. Effect-Size Measures and Meta-Analytic Thinking in Counseling Psychology Research

    ERIC Educational Resources Information Center

    Henson, Robin K.

    2006-01-01

    Effect sizes are critical to result interpretation and synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret effect sizes…
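    As a concrete companion to the article's point, the most common standardized effect size for two groups (Cohen's d with a pooled SD) and a large-sample confidence interval can be computed as follows; the numbers in the example are made up:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_conf_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI from the large-sample standard error of d."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

d = cohens_d(10.0, 2.0, 30, 8.0, 2.0, 30)   # d = 1.0, conventionally 'large'
lo, hi = d_conf_interval(d, 30, 30)
```

    Reporting the interval (lo, hi) alongside d is exactly the practice the article advocates over a bare p-value.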

  2. Effects of chirp on two-dimensional Fourier transform electronic spectra.

    PubMed

    Tekavec, Patrick F; Myers, Jeffrey A; Lewis, Kristin L M; Fuller, Franklin D; Ogilvie, Jennifer P

    2010-05-24

    We examine the effect that pulse chirp has on the shape of two-dimensional electronic spectra through calculations and experiments. For the calculations we use a model two-electronic-level system with a solvent interaction represented by a simple Gaussian correlation function, and compare the resulting spectra to experiments carried out on an organic dye molecule (Rhodamine 800). Both calculations and experiments show that distortions due to chirp are most significant when the pulses used in the experiment have different amounts of chirp, introducing peak shape asymmetry that could be interpreted as spectrally dependent relaxation. When all pulses have similar chirp the distortions are reduced but still affect the anti-diagonal symmetry of the peak shapes and introduce negative features that could be interpreted as excited state absorption.

  3. Solvent effect on the vibrational spectra of Carvedilol.

    PubMed

    Billes, Ferenc; Pataki, Hajnalka; Unsalan, Ozan; Mikosch, Hans; Vajna, Balázs; Marosi, György

    2012-09-01

    Carvedilol (CRV) is an important medicament for heart arrhythmia. The aim of this work was the interpretation of its vibrational spectra with consideration of solvent effects. Infrared and Raman spectra were recorded in the solid state as well as in solution. The experimental spectra were evaluated using DFT quantum chemical calculations, computing the optimized structure, atomic net charges, vibrational frequencies and force constants. The same calculations were done for the molecule in DMSO and aqueous solutions applying the PCM method. The calculated force constants were scaled to the experimentally observed solid-state frequencies. The characters of the vibrational modes were determined by their potential energy distributions. Solvent effects on the molecular properties were interpreted. Based on these results, vibrational spectra were simulated. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Adaptive Enhancement of X-Band Marine Radar Imagery to Detect Oil Spill Segments

    PubMed Central

    Liu, Peng; Li, Ying; Xu, Jin; Zhu, Xueyuan

    2017-01-01

    Oil spills generate a large cost in environmental and economic terms. Their identification plays an important role in oil-spill response. We propose an oil spill detection method with improved adaptive enhancement on X-band marine radar systems. The radar images used in this paper were acquired on 21 July 2010, from the teaching-training ship “YUKUN” of the Dalian Maritime University. According to the shape characteristic of co-channel interference, two convolutional filters are used to detect the location of the interference, followed by a mean filter to erase the interference. Small objects, such as bright speckles, are taken as a mask in the radar image and improved by the Fields-of-Experts model. The region marked by strong reflected signals from the sea’s surface is selected to identify oil spills. The selected region is subject to improved adaptive enhancement designed based on features of radar images. With the proposed adaptive enhancement technique, automated oil spill detection accuracy is comparable to that of visual interpretation. PMID:29036892
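    The mean filter used to erase the detected co-channel interference is a standard operation; a minimal NumPy version (the 3×3 kernel size is an assumption, not the authors' exact parameters) looks like:

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k box (mean) filter with edge-replicating padding."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    # Sum the k*k shifted copies of the image, then normalize
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)
```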

  5. Attribute classification for generating GPR facies models

    NASA Astrophysics Data System (ADS)

    Tronicke, Jens; Allroggen, Niklas

    2017-04-01

    Ground-penetrating radar (GPR) is an established geophysical tool to explore near-surface sedimentary environments. It has been successfully used, for example, to reconstruct past depositional environments, to investigate sedimentary processes, to aid hydrogeological investigations, and to assist in hydrocarbon reservoir analog studies. Interpreting such 2D/3D GPR data usually relies on concepts known as GPR facies analysis, in which GPR facies are defined as units composed of characteristic reflection patterns (in terms of reflection amplitude, continuity, geometry, and internal configuration). The resulting facies models are then interpreted in terms of depositional processes, sedimentary environments, litho- and hydrofacies. Typically, such GPR facies analyses are implemented in a manual workflow, which is laborious and rather inefficient, especially for 3D data sets. In addition, such a subjective strategy bears the potential of inconsistency because the outcome depends on the expertise and experience of the interpreter. In this presentation, we investigate the feasibility of delineating GPR facies in an objective and largely automated manner. Our proposed workflow relies on a three-step procedure. First, we calculate a variety of geometrical and physical attributes from processed 2D and 3D GPR data sets. Then, we analyze and evaluate this attribute data base (e.g., using statistical tools such as principal component analysis) to reduce its dimensionality and to avoid redundant information. Finally, we integrate the reduced data base using tools such as composite imaging, cluster analysis, and neural networks. Using field examples acquired across different depositional environments, we demonstrate that the resulting 2D/3D facies models ease and improve the interpretation of GPR data. We conclude that our interpretation strategy allows GPR facies models to be generated in a consistent and largely automated manner and might be helpful in a variety of near-surface applications.
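    The three-step workflow (attributes → dimensionality reduction → clustering/integration) can be caricatured with PCA via SVD followed by a plain k-means. The attribute matrix, component count, and cluster count below are placeholders, and the authors' toolbox (composite imaging, neural networks) is richer than this sketch:

```python
import numpy as np

def facies_from_attributes(attrs, n_components=3, n_clusters=4, seed=0, iters=50):
    """attrs: (n_samples, n_attributes) matrix of GPR attributes.
    Standardize -> PCA (reduce redundancy) -> k-means (facies labels)."""
    x = (attrs - attrs.mean(0)) / attrs.std(0)
    # PCA via SVD of the standardized attribute matrix
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    scores = x @ vt[:n_components].T
    # Plain k-means (Lloyd's algorithm) on the PCA scores
    rng = np.random.default_rng(seed)
    centers = scores[rng.choice(len(scores), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((scores[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = scores[labels == c].mean(0)
    return labels

# Toy attribute matrix: 200 samples, 6 attributes
attrs = np.random.default_rng(1).standard_normal((200, 6))
labels = facies_from_attributes(attrs)
```

    In practice each sample would be a trace window or voxel, so the label array maps directly back onto the 2D/3D section as a facies model.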

  6. Decision making in a multidisciplinary cancer team: does team discussion result in better quality decisions?

    PubMed

    Kee, Frank; Owen, Tracy; Leathem, Ruth

    2004-01-01

    To establish whether treatment recommendations made by clinicians concur with the best outcomes predicted from their prognostic estimates and whether team discussion improves the quality or outcome of their decision making, the authors studied real-time decision making by a lung cancer team. Clinicians completed pre- and postdiscussion questionnaires for 50 newly diagnosed patients. For each patient/doctor pairing, a decision model determined the expected patient outcomes from the clinician's prognostic estimates. The difference between the expected utility of the recommended treatment and the maximum utility derived from the clinician's predictions of the outcomes (the net utility loss) following all potential treatment modalities was calculated as an indicator of quality of the decision. The proportion of treatment decisions changed by the multidisciplinary team discussion was also calculated. Insofar as the change in net utility loss brought about by multidisciplinary team discussion was not significantly different from zero, team discussion did not improve the quality of decision making overall. However, given the modest power of the study, these findings must be interpreted with caution. In only 23 of 87 instances (26%) in which an individual specialist's initial treatment preference differed from the final group judgment did the specialist finally concur with the group treatment choice after discussion. This study does not support the theory that team discussion improves decision making by closing a knowledge gap.
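    The study's quality measure, net utility loss, is a one-line calculation once expected utilities have been derived from the clinician's prognostic estimates; a sketch with hypothetical treatments and utilities:

```python
def net_utility_loss(expected_utility, recommended):
    """Expected utility forgone by recommending `recommended` instead of
    the treatment with maximum expected utility. Zero means the
    recommendation agrees with the clinician's own prognostic estimates."""
    return max(expected_utility.values()) - expected_utility[recommended]

# Hypothetical expected utilities for one patient/doctor pairing
utilities = {"surgery": 0.70, "radiotherapy": 0.60, "palliative": 0.40}
loss = net_utility_loss(utilities, "radiotherapy")  # 0.70 - 0.60 = 0.10
```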

  7. Understanding molecular structure from molecular mechanics.

    PubMed

    Allinger, Norman L

    2011-04-01

    Molecular mechanics gives us a well-known model of molecular structure. It is less widely recognized that valence bond theory gives us structures which offer a direct interpretation of molecular mechanics formulations and parameters. The electronic effects well known in physical organic chemistry can be directly interpreted in terms of valence bond structures, and hence quantitatively calculated and understood. The basic theory is outlined in this paper, and examples of these effects and their interpretation are presented through illustrative cases.

  8. Radiology Workflow Dynamics: How Workflow Patterns Impact Radiologist Perceptions of Workplace Satisfaction.

    PubMed

    Lee, Matthew H; Schemmel, Andrew J; Pooler, B Dustin; Hanley, Taylor; Kennedy, Tabassum; Field, Aaron; Wiegmann, Douglas; Yu, John-Paul J

    2017-04-01

    The study aimed to assess perceptions of reading room workflow and the impact separating image-interpretive and nonimage-interpretive task workflows can have on radiologist perceptions of workplace disruptions, workload, and overall satisfaction. A 14-question survey instrument was developed to measure radiologist perceptions of workplace interruptions, satisfaction, and workload prior to and following implementation of separate image-interpretive and nonimage-interpretive reading room workflows. The results were collected over 2 weeks preceding the intervention and 2 weeks following the end of the intervention. The results were anonymized and analyzed using univariate analysis. A total of 18 people responded to the preintervention survey: 6 neuroradiology fellows and 12 attending neuroradiologists. Fifteen people who were then present for the 1-month intervention period responded to the postintervention survey. Perceptions of workplace disruptions, image interpretation, quality of trainee education, ability to perform nonimage-interpretive tasks, and quality of consultations (P < 0.0001) all improved following the intervention. Mental effort and workload also improved across all assessment domains, as did satisfaction with quality of image interpretation and consultative work. Implementation of parallel dedicated image-interpretive and nonimage-interpretive workflows may improve markers of radiologist perceptions of workplace satisfaction. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. SUMMARY OF PROGRESS REPORT ON NUCLEAR PHYSICS , DURING YEARLY PERIOD ENDING IN FEBRUARY 1963

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1963-10-31

    Aspects of nucleon-nucleon interactions are being examined. Tests of the mathematical form of the one-pion exchange interaction and of the validity of charge independence were improved. No real negation of the applicability of either the mathematical form of the one-pion exchange potential or of charge independence has been found. Work on p-p and p-n scattering analysis centered on improving the accuracy of data and on rewriting IBM 704 programs for the IBM 709 and 7090. The new programs provide separate treatments of errors in quantities such as differential cross sections and in relative values at the same incident energy. Improvements in the least squares method of adjustment to data were also made. Difficulties in correcting for the effects of nuclear magnetic moments were examined, and formulas and numbers giving the corrections applicable to the one-pion exchange group were worked out and incorporated in new machine programs. A partial analysis of data for n-p scattering in the 2 to 3 Bev energy region shows the existence of a sharp peak in the angular distribution of recoil protons from elastic collisions. Interpretation of work on the interplay of multipion resonance effects with each other and with the direct pion interaction was begun. Systematization of spin rotation effect and aspects of relativistic effects in the Coulomb interaction in p-p scattering was carried out. Treatment of N-N spin-orbit and central field interactions caused by vector meson exchange was improved. Programming of the IBM 709 for computation of the first order nucleon-nucleus optical potential and nucleon-nucleus observables in first Born approximation at forward angles from N-N phase parameters was performed. An IBM 709 program for computing pi-N scattering observables from phase shifts was also written. 
Investigation of photodisintegration of deuterons involved correcting the formulas used in calculating higher order magnetic multipole effects for some omitted effects and rewriting the 704 programs for the IBM 709. Progress in a scattering matrix approach to deuteron photodisintegration was also made. In the theory of nucleon-transfer reactions, systematization of the connection between completely quantum mechanical and semiclassical treatments, including analysis of approximations required in the former to obtain the latter, was attempted. Presence of interference effects between the waves was partially confirmed by experiment. Calculations of third order effects in Coulomb excitation were carried out, and programming for more extensive work was begun. Work on interpretation of heavy ion elastic scattering involved additional calculations of the interaction using nucleon-nucleus scattering with some calculations using an ingoing wave boundary condition instead of an optical model treatment of the nuclear interior. Twenty-one papers published or submitted for publication during the period are listed. (D.C.W.)

  10. Interpreters, Interpreting, and the Study of Bilingualism.

    ERIC Educational Resources Information Center

    Valdes, Guadalupe; Angelelli, Claudia

    2003-01-01

    Discusses research on interpreting focused specifically on issues raised by this literature about the nature of bilingualism. Suggests research carried out on interpreting--while primarily produced with a professional audience in mind and concerned with improving the practice of interpreting--provides valuable insights about complex aspects of…

  11. The Mean as Balance Point

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2012-01-01

    There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…

  12. Periodicity in spatial data and geostatistical models: autocorrelation between patches

    Treesearch

    Volker C. Radeloff; Todd F. Miller; Hong S. He; David J. Mladenoff

    2000-01-01

    Several recent studies in landscape ecology have found periodicity in correlograms or semi-variograms calculated, for instance, from spatial data of soils, forests, or animal populations. Some of the studies interpreted this as an indication of regular or periodic landscape patterns. This interpretation is in disagreement with other studies that doubt whether such...
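The periodicity these studies discuss can be reproduced with a toy example (a hypothetical 1-D transect, not the authors' data): the empirical semivariogram of a periodic spatial pattern shows dips near multiples of the period.

```python
import numpy as np

# Empirical semivariogram of a periodic 1-D pattern (period 20) plus
# noise: gamma(h) = half the mean squared difference at lag h.
rng = np.random.default_rng(2)
x = np.arange(200.0)
z = np.sin(2 * np.pi * x / 20.0) + 0.1 * rng.standard_normal(200)

def semivariogram(z, max_lag):
    return [0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in range(1, max_lag + 1)]

gamma = semivariogram(z, 40)
# Periodicity in the data shows up as dips in gamma(h) near lags 20, 40, ...
```

Whether such dips reflect real landscape pattern or an artifact is exactly the interpretive question the paper raises.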

  13. Text-interpreter language for flexible generation of patient notes and instructions.

    PubMed

    Forker, T S

    1992-01-01

    An interpreted computer language has been developed along with a windowed user interface and multi-printer-support formatter to allow preparation of documentation of patient visits, including progress notes, prescriptions, excuses for work/school, outpatient laboratory requisitions, and patient instructions. Input is by trackball or mouse with little or no keyboard skill required. For clinical problems with specific protocols, the clinician can be prompted with problem-specific items of history, exam, and lab data to be gathered and documented. The language implements a number of text-related commands as well as branching logic and arithmetic commands. In addition to generating text, it is simple to implement arithmetic calculations such as weight-specific drug dosages; multiple branching decision-support protocols for paramedical personnel (or physicians); and calculation of clinical scores (e.g., coma or trauma scores) while simultaneously documenting the status of each component of the score. ASCII text files produced by the interpreter are available for computerized quality audit. Interpreter instructions are contained in text files users can customize with any text editor.

  14. Radar imaging of glaciovolcanic stratigraphy, Mount Wrangell caldera, Alaska - Interpretation model and results

    NASA Technical Reports Server (NTRS)

    Clarke, Garry K. C.; Cross, Guy M.; Benson, Carl S.

    1989-01-01

Glaciological measurements and an airborne radar sounding survey of the glacier lying in Mount Wrangell caldera raise many questions concerning the glacier thermal regime and volcanic history of Mount Wrangell. An interpretation model has been developed that allows the depth variation of temperature, heat flux, pressure, density, ice velocity, depositional age, and thermal and dielectric properties to be calculated. Some predictions of the interpretation model are that the basal ice melting rate is 0.64 m/yr and the volcanic heat flux is 7.0 W/sq m. By using the interpretation model to calculate two-way travel time and propagation losses, radar sounding traces can be transformed to give estimates of the variation of power reflection coefficient as a function of depth and depositional age. Prominent internal reflecting zones are located at depths of approximately 59-91 m, 150 m, 203 m, and 230 m. These internal reflectors are attributed to buried horizons of acidic ice, possibly intermixed with volcanic ash, that were deposited during past eruptions of Mount Wrangell.
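The depth-to-travel-time conversion mentioned above can be sketched roughly as follows. This is a simplified constant-velocity approximation with an assumed permittivity for solid ice, not the paper's full density-dependent model; only the reflector depths are taken from the abstract.

```python
# Two-way radar travel time for reflectors at known depths, assuming a
# single propagation speed v = c / sqrt(eps_r) (eps_r ~ 3.15 for ice is
# an assumed value; real firn has lower permittivity near the surface).
c = 3.0e8               # speed of light in vacuum, m/s
eps_r = 3.15            # assumed relative permittivity of solid ice
v = c / eps_r ** 0.5    # ~1.69e8 m/s in ice

depths = [59.0, 91.0, 150.0, 203.0, 230.0]        # reflector depths, m
two_way_t_us = [2 * d / v * 1e6 for d in depths]  # travel times, microseconds
```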

  15. Atrial electrogram interpretation improves after an innovative education program.

    PubMed

    Preston, Julie L; Currey, Judy; Considine, Julie

    2015-01-01

To avoid adverse patient outcomes from inappropriate treatment, it is recommended that an atrial electrogram (AEG) be recorded whenever atrial arrhythmias develop in patients after cardiac surgery. However, AEGs are not commonly performed because nurses lack knowledge about differentiating atrial rhythms on AEGs. To investigate whether completing a novel online evidence-based education program on interpreting AEGs would improve critical care nurses' AEG interpretation. Specialized critical care nurses were taught about obtaining and interpreting atrial rhythms on AEGs using a 42-minute online mini-movie. AEG interpretation was assessed before the intervention and at two and eight weeks afterward. AEG interpretation improved two weeks post-intervention, and this improvement was retained at eight weeks. Some participants used this newly acquired knowledge to interpret arrhythmias that were not taught during the education program. Accurate interpretation of AEGs is an easy skill for specialized critical care nurses to learn via an online education program.

  16. Evaluation of airway protection: Quantitative timing measures versus penetration/aspiration score.

    PubMed

    Kendall, Katherine A

    2017-10-01

Quantitative measures of swallowing function may improve the reliability and accuracy of modified barium swallow (MBS) study interpretation. Quantitative study analysis has not been widely instituted, however, secondary to concerns about the time required to make measures and a lack of research demonstrating impact on MBS interpretation. This study compares the accuracy of the penetration/aspiration (PEN/ASP) scale (an observational visual-perceptual assessment tool) to quantitative measures of airway closure timing, relative to the arrival of the bolus at the upper esophageal sphincter, in identifying a failure of airway protection during deglutition. Retrospective review of clinical swallowing data from a university-based outpatient clinic. Swallowing data from 426 patients were reviewed. Patients with normal PEN/ASP scores were identified, and the results of quantitative airway closure timing measures for three liquid bolus sizes were evaluated. The incidence of significant airway closure delay with and without a normal PEN/ASP score was determined. Inter-rater reliability for the quantitative measures was calculated. In patients with a normal PEN/ASP score, 33% demonstrated a delay in airway closure on at least one swallow during the MBS study. There was no correlation between PEN/ASP score and airway closure delay. Inter-rater reliability for the quantitative measure of airway closure timing was nearly perfect (intraclass correlation coefficient = 0.973). The use of quantitative measures of swallowing function, in conjunction with traditional visual-perceptual methods of MBS study interpretation, improves the identification of airway closure delay, and hence potential aspiration risk, even when no penetration or aspiration is apparent on the MBS study. Level of Evidence: 4. Laryngoscope, 127:2314-2318, 2017.

  17. The Productivity Dilemma in Workplace Health Promotion.

    PubMed

    Cherniack, Martin

    2015-01-01

Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion, WHP) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that the return on investment (ROI) does not merit preventive health investment. Pertinent studies were reviewed and their results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of three conceptual dilemmas. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.

  18. Faddeev-chiral unitary approach to the K-d scattering length

    NASA Astrophysics Data System (ADS)

    Mizutani, T.; Fayard, C.; Saghai, B.; Tsushima, K.

    2013-03-01

Our earlier Faddeev three-body study of the K⁻-deuteron scattering length, A_K⁻d, is revisited here in light of recent developments on two fronts: (i) the improved chiral unitary approach to the theoretical description of the coupled K̄N-related channels at low energies, and (ii) the new and improved measurement by the SIDDHARTA Collaboration of the strong-interaction energy shift and width in the lowest K⁻-hydrogen atomic level. These two, in combination, have allowed us to produce a reliable two-body input to the three-body calculation. All available low-energy K⁻p observables are well reproduced, and predictions for the K̄N scattering lengths and amplitudes, (πΣ)⁰ invariant-mass spectra, as well as for A_K⁻d, are put forward and compared with results from other sources. The findings of the present work are expected to be useful in interpreting the forthcoming data from the CLAS, HADES, LEPS, and SIDDHARTA Collaborations.

  19. Project Echo: System Calculations

    NASA Technical Reports Server (NTRS)

    Ruthroff, Clyde L.; Jakes, William C., Jr.

    1961-01-01

    The primary experimental objective of Project Echo was the transmission of radio communications between points on the earth by reflection from the balloon satellite. This paper describes system calculations made in preparation for the experiment and their adaptation to the problem of interpreting the results. The calculations include path loss computations, expected audio signal-to-noise ratios, and received signal strength based on orbital parameters.
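The kind of path-loss computation described can be illustrated with the bistatic radar equation for a passive spherical reflector. All numerical values below are hypothetical placeholders in the general spirit of the Echo experiment, not figures from the paper.

```python
import math

# Received power for a two-hop link via a passive spherical reflector,
# using the bistatic radar equation. Every parameter here is an assumed,
# illustrative value.
c = 3.0e8                 # speed of light, m/s
f = 2.39e9                # carrier frequency, Hz
lam = c / f               # wavelength, m
Pt = 10e3                 # transmitted power, W
Gt = Gr = 10 ** (40 / 10) # transmit/receive antenna gains (40 dB each)
a = 15.0                  # balloon radius, m
sigma = math.pi * a ** 2  # radar cross section of a large conducting sphere
R1 = R2 = 1.6e6           # ground-to-balloon slant ranges, m

Pr = Pt * Gt * Gr * lam ** 2 * sigma / ((4 * math.pi) ** 3 * R1 ** 2 * R2 ** 2)
Pr_dBm = 10 * math.log10(Pr * 1e3)   # received power in dBm
```

The R1²·R2² dependence is what made the passive-reflector link budget so demanding and motivated careful system calculations before the experiment.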

  20. Interactive Software For Astrodynamical Calculations

    NASA Technical Reports Server (NTRS)

    Schlaifer, Ronald S.; Skinner, David L.; Roberts, Phillip H.

    1995-01-01

QUICK computer program provides user with facilities of sophisticated desk calculator: performs scalar, vector, and matrix arithmetic; propagates conic-section orbits; determines planetary and satellite coordinates; and performs other related astrodynamic calculations within FORTRAN-like software environment. QUICK is an interpreter; no compiler or linker is needed to run QUICK code. Outputs plotted in variety of formats on variety of terminals. Written in RATFOR.

  1. Equity in Irish health care financing: measurement issues.

    PubMed

    Smith, Samantha

    2010-04-01

    This paper employs widely used analytic techniques for measuring equity in health care financing to update Irish results from previous analysis based on data from the late 1980s. Kakwani indices are calculated using household survey data from 1987/88 to 2004/05. Results indicate a marginally progressive financing system overall. However, interpretation of the results for the private sources of health financing is complicated. This problem is not unique to Ireland but it is argued that it may be relatively more important in the context of a complex health financing system, illustrated in this paper by the Irish system. Alternative options for improving the analysis of equity in health care financing are discussed.
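A Kakwani index of the sort calculated in the paper can be sketched as the concentration index of health payments (with households ranked by income) minus the Gini coefficient of income; positive values indicate progressive financing. The data below are toy numbers, not the Irish survey results, and the covariance formula is one standard way to compute a concentration index.

```python
import numpy as np

def concentration_index(amounts, income):
    """Concentration index of `amounts`, with units ranked by `income`,
    via the convenient covariance formula CI = 2 * cov(y, rank) / mean(y)."""
    order = np.argsort(income)
    y = np.asarray(amounts, float)[order]
    n = len(y)
    ranks = (np.arange(1, n + 1) - 0.5) / n   # fractional income ranks
    return 2.0 * np.cov(y, ranks, bias=True)[0, 1] / y.mean()

income = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical households
payments = np.array([1.0, 2.5, 4.5, 7.0])     # payment share rises with income

gini = concentration_index(income, income)    # Gini = CI of income itself
kakwani = concentration_index(payments, income) - gini  # > 0: progressive
```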

  2. Distance geometry protocol to generate conformations of natural products to structurally interpret ion mobility-mass spectrometry collision cross sections.

    PubMed

    Stow, Sarah M; Goodwin, Cody R; Kliman, Michal; Bachmann, Brian O; McLean, John A; Lybrand, Terry P

    2014-12-04

    Ion mobility-mass spectrometry (IM-MS) allows the separation of ionized molecules based on their charge-to-surface area (IM) and mass-to-charge ratio (MS), respectively. The IM drift time data that is obtained is used to calculate the ion-neutral collision cross section (CCS) of the ionized molecule with the neutral drift gas, which is directly related to the ion conformation and hence molecular size and shape. Studying the conformational landscape of these ionized molecules computationally provides interpretation to delineate the potential structures that these CCS values could represent, or conversely, structural motifs not consistent with the IM data. A challenge in the IM-MS community is the ability to rapidly compute conformations to interpret natural product data, a class of molecules exhibiting a broad range of biological activity. The diversity of biological activity is, in part, related to the unique structural characteristics often observed for natural products. Contemporary approaches to structurally interpret IM-MS data for peptides and proteins typically utilize molecular dynamics (MD) simulations to sample conformational space. However, MD calculations are computationally expensive, they require a force field that accurately describes the molecule of interest, and there is no simple metric that indicates when sufficient conformational sampling has been achieved. Distance geometry is a computationally inexpensive approach that creates conformations based on sampling different pairwise distances between the atoms within the molecule and therefore does not require a force field. Progressively larger distance bounds can be used in distance geometry calculations, providing in principle a strategy to assess when all plausible conformations have been sampled. Our results suggest that distance geometry is a computationally efficient and potentially superior strategy for conformational analysis of natural products to interpret gas-phase CCS data.
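The core distance-geometry step, sampling pairwise distances within bounds and embedding them into 3-D coordinates, can be sketched schematically. This is an illustrative classical-MDS embedding with uniform, made-up bounds, not the authors' protocol; real bound matrices encode bonding and geometric constraints, and embeddings are refined afterward.

```python
import numpy as np

# Schematic distance-geometry conformer step: sample a symmetric distance
# matrix between lower/upper bounds, then embed it into 3-D coordinates
# with classical multidimensional scaling (double-centering + eigendecomposition).
rng = np.random.default_rng(0)
n = 5                                        # hypothetical 5-atom fragment
lower = np.full((n, n), 1.5)                 # assumed uniform bounds, angstroms
upper = np.full((n, n), 4.0)

D = rng.uniform(lower, upper)                # sample distances inside bounds
D = np.triu(D, 1)
D = D + D.T                                  # symmetric, zero diagonal

J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:3]                # three largest eigenvalues
coords = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))   # n x 3 conformer
```

Progressively widening the bounds, as the abstract describes, broadens the sampled conformational ensemble.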

  3. Distance Geometry Protocol to Generate Conformations of Natural Products to Structurally Interpret Ion Mobility-Mass Spectrometry Collision Cross Sections

    PubMed Central

    2015-01-01

    Ion mobility-mass spectrometry (IM-MS) allows the separation of ionized molecules based on their charge-to-surface area (IM) and mass-to-charge ratio (MS), respectively. The IM drift time data that is obtained is used to calculate the ion-neutral collision cross section (CCS) of the ionized molecule with the neutral drift gas, which is directly related to the ion conformation and hence molecular size and shape. Studying the conformational landscape of these ionized molecules computationally provides interpretation to delineate the potential structures that these CCS values could represent, or conversely, structural motifs not consistent with the IM data. A challenge in the IM-MS community is the ability to rapidly compute conformations to interpret natural product data, a class of molecules exhibiting a broad range of biological activity. The diversity of biological activity is, in part, related to the unique structural characteristics often observed for natural products. Contemporary approaches to structurally interpret IM-MS data for peptides and proteins typically utilize molecular dynamics (MD) simulations to sample conformational space. However, MD calculations are computationally expensive, they require a force field that accurately describes the molecule of interest, and there is no simple metric that indicates when sufficient conformational sampling has been achieved. Distance geometry is a computationally inexpensive approach that creates conformations based on sampling different pairwise distances between the atoms within the molecule and therefore does not require a force field. Progressively larger distance bounds can be used in distance geometry calculations, providing in principle a strategy to assess when all plausible conformations have been sampled. Our results suggest that distance geometry is a computationally efficient and potentially superior strategy for conformational analysis of natural products to interpret gas-phase CCS data. 
PMID:25360896

  4. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and of the oculoplastic surgeons' estimates were calculated relative to computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process one chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
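The percent-error metric used in the comparison is straightforward to reproduce; the defect values below are hypothetical, not taken from the study's charts.

```python
# Percent error of an estimate against a software-computed "actual"
# superior visual field defect. All numbers here are made up.
actual_defect = 40.0            # percent of superior field lost (hypothetical)
app_estimate = 40.8             # hypothetical app output
surgeon_estimate = 32.0         # hypothetical visual estimate (underestimates)

def percent_error(estimate, truth):
    return abs(estimate - truth) / truth * 100.0

app_error = percent_error(app_estimate, actual_defect)          # ~2.0
surgeon_error = percent_error(surgeon_estimate, actual_defect)  # ~20.0
```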

  5. Treating Metaphor Interpretation Deficits Subsequent to Right Hemisphere Brain Damage: Preliminary Results

    PubMed Central

    Lundgren, Kristine; Brownell, Hiram; Cayer-Meade, Carol; Milione, Janet; Kearns, Kevin

    2012-01-01

    Purpose This investigation sought to determine whether a structured intervention focused on improving use of semantic associations could improve patients’ ability to provide oral interpretations of metaphors following Right Hemisphere Damage (RHD). Methods Principles of single subject experimental design provided the basis for the study. Five patients received either 10 or 20 baseline assessments of oral metaphor interpretation and, as a control, assessments of line orientation skill. They then received approximately 10 one-hour sessions of structured intervention to improve oral metaphor interpretation followed by post-training assessments and a 3 month follow up. Results Patients’ performances revealed evidence of good response to training as shown by patients' ability to reach criterion on all intervention tasks and by their significant improvement on oral metaphor interpretation. There was relatively little improvement on the line orientation task. Discussion The results of this study support the clinical usefulness of this new approach to treating communication deficits associated with RHD due to stroke, even years post-onset. There are, however, questions that remain unanswered. For example, additional data will be needed to gauge how a patient’s severity of impairment relates to the potential for improvement, to chart the durability and scope of improvement associated with the training, and to determine the type of visuospatial ability needed for using this type of pictorial material. PMID:22837588

  6. Stable Computation of the Vertical Gradient of Potential Field Data Based on Incorporating the Smoothing Filters

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Liu, Shuang; Abbas, Mahmoud Ahmed

    2018-04-01

The vertical gradient is an essential tool in interpretation algorithms. It is also the primary enhancement technique for improving the resolution of measured gravity and magnetic field data, since it has higher sensitivity than the measured field to changes in the physical properties (density or susceptibility) of subsurface structures. If the field derivatives are not measured directly with gradiometers, they can be calculated from the collected gravity or magnetic data using numerical methods, such as those based on the fast Fourier transform. The gradients behave like high-pass filters and enhance short-wavelength anomalies, which may be associated either with small, shallow sources or with high-frequency noise in the data, so their numerical computation is prone to noise amplification. This behaviour can adversely affect the stability of the derivatives in the presence of even a small level of noise and consequently limit their application in interpretation methods. Adding a smoothing term to the conventional Fourier-domain formulation of the vertical gradient can improve the stability of the numerical differentiation. In this paper, we propose a strategy in which the overall efficiency of the classical Fourier-domain algorithm is improved by incorporating two different smoothing filters. For the smoothing term, a simple qualitative procedure based on upward continuation of the field to a higher altitude is introduced to estimate the related parameters, called the regularization parameter and the cut-off wavenumber in the corresponding filters. The efficiency of these new approaches is validated by computing the first- and second-order derivatives of noise-corrupted synthetic data sets and comparing the results with the true ones. The filtered and unfiltered vertical gradients are incorporated into the extended Euler deconvolution to estimate the depth and structural index of a magnetic sphere, thereby evaluating the methods quantitatively. In the real case, the described algorithms are used to enhance a portion of aeromagnetic data acquired in the Mackenzie Corridor, Northern Mainland, Canada.
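The basic stabilization idea, attaching an upward-continuation-style smoothing factor to the Fourier-domain derivative operator, can be sketched in one dimension. The profile, noise level, and smoothing height below are illustrative choices, not the paper's parameters or its exact filters.

```python
import numpy as np

# Vertical gradient of a potential-field profile in the Fourier domain:
# the bare |k| operator amplifies short-wavelength noise; multiplying by
# an upward-continuation factor exp(-|k| h) damps it.
n, dx = 256, 100.0                            # samples, station spacing (m)
x = np.arange(n) * dx
field = np.exp(-((x - 12800.0) / 2000.0) ** 2)          # smooth anomaly
noisy = field + 0.01 * np.random.default_rng(1).standard_normal(n)

k = np.abs(2 * np.pi * np.fft.fftfreq(n, d=dx))         # radial wavenumber
h = 200.0                                               # smoothing height, m

def vertical_gradient(f, smooth=False):
    F = np.fft.fft(f)
    op = k * (np.exp(-k * h) if smooth else 1.0)        # d/dz operator in k-domain
    return np.real(np.fft.ifft(op * F))

raw = vertical_gradient(noisy)                  # noise-amplified derivative
stable = vertical_gradient(noisy, smooth=True)  # smoothed, stabilized derivative
```

Choosing h too large attenuates the signal itself, which is why the paper estimates the smoothing parameters from the data.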

  7. Acoustic resonances of fluid-immersed elastic cylinders and spheroids: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Niemiec, Jan; Überall, Herbert; Bao, X. L.

    2002-05-01

    Frequency resonances in the scattering of acoustic waves from a target object are caused by the phase matching of surface waves repeatedly encircling the object. This is exemplified here by considering elastic finite cylinders and spheroids, and the phase-matching condition provides a means of calculating the complex resonance frequencies of such objects. Tank experiments carried out at Catholic University, or at the University of Le Havre, France by G. Maze and J. Ripoche, have been interpreted using this approach. The experiments employed sound pulses to measure arrival times, which allowed identification of the surface paths taken by the surface waves, thus giving rise to resonances in the scattering amplitude. A calculation of the resonance frequencies using the T-matrix approach showed satisfactory agreement with the experimental resonance frequencies that were either measured directly (as at Le Havre), or that were obtained by the interpretation of measured arrival times (at Catholic University) using calculated surface wave paths, and the extraction of resonance frequencies therefrom, on the basis of the phase-matching condition. Results for hemispherically endcapped, evacuated steel cylinders obtained in a lake experiment carried out by the NSWC were interpreted in the same fashion.
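The phase-matching condition can be written down in its simplest form for a surface wave encircling a cylinder: resonance occurs when an integer number of wavelengths fits around the circumference. The radius and phase speed below are hypothetical, and this neglects dispersion and the complex (damped) part of the true resonance frequencies.

```python
import math

# Simplest phase-matching estimate of surface-wave resonance frequencies:
# n wavelengths around the circumference, f_n = n * c / (2 * pi * a).
a = 0.05          # cylinder radius, m (assumed)
c = 3000.0        # surface-wave phase speed, m/s (assumed, non-dispersive)
circumference = 2 * math.pi * a

resonances = [n * c / circumference for n in range(1, 6)]   # f_1..f_5, Hz
```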

  8. Challenging Density Functional Theory Calculations with Hemes and Porphyrins

    PubMed Central

    de Visser, Sam P.; Stillman, Martin J.

    2016-01-01

In this paper we review recent advances in computational chemistry, focusing specifically on the chemical description of heme proteins and of synthetic porphyrins that serve both as mimics of natural processes and in technological applications. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol−1). This paper gives recent examples from our groups, in which we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends, and guide the interpretation of electronic structures of complex systems. The case studies focus on the calculation of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties. PMID:27070578

  9. Comparing biomarker measurements to a normal range: when ...

    EPA Pesticide Factsheets

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results. The National Exposure Research Laboratory’s (NERL’s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA’s mission to protect human health and the environment. HEASD’s research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA’s strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
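The SD-versus-SEM choice discussed in the commentary can be made concrete with a few hypothetical measurements: the standard deviation describes the spread of individual results (relevant for comparing one new measurement to a normal range), while the standard error of the mean describes the uncertainty of the mean itself.

```python
import math

# Hypothetical biomarker measurements (not real data): SD vs SEM.
values = [4.1, 5.0, 4.7, 5.3, 4.9, 5.6, 4.4, 5.2]
n = len(values)
mean = sum(values) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample SD
sem = sd / math.sqrt(n)                                          # SEM = SD / sqrt(n)

# Use sd to judge whether a single new measurement falls in the normal
# range; use sem when comparing the group mean to a reference value.
```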

  10. Alternative first-principles calculation of entropy for liquids

    DOE PAGES

    Meyer, Edmund R.; Ticknor, Christopher; Kress, Joel D.; ...

    2016-04-15

Here, we present an alternative method for interpreting the velocity autocorrelation function (VACF) of a fluid, with application to extracting the entropy in a manner similar to the methods developed by Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and improved upon by Desjarlais [Phys. Rev. E 88, 062145 (2013)]. The liquid VACF is decomposed into two components, one gas and one solid, and each contribution's entropic portion is calculated. However, we fit both the gas and solid portions of the VACF in the time domain. This approach is applied to a single-component liquid (a two-phase model of liquid Al at the melt line) and two different two-component systems: a superionic-to-superionic (bcc to fcc) phase transition in H2O at high temperatures and pressures, and a metastable liquid state of MgO. Finally, for all three examples, comparisons to existing results in the literature demonstrate the validity of our alternative.

  11. Tin Oxide Crystals Exposed by Low-Energy {110} Facets for Enhanced Electrochemical Heavy Metal Ions Sensing: X-ray Absorption Fine Structure Experimental Combined with Density-Functional Theory Evidence.

    PubMed

    Jin, Zhen; Yang, Meng; Chen, Shao-Hua; Liu, Jin-Huai; Li, Qun-Xiang; Huang, Xing-Jiu

    2017-02-21

Herein, we reveal that the electrochemical behavior in the detection of heavy metal ions (HMIs) largely depends on the exposed facets of SnO2 nanoparticles. Compared to the high-energy {221} facet, the low-energy {110} facet of SnO2 possessed better electrochemical performance. Adsorption/desorption tests, density-functional theory (DFT) calculations, and X-ray absorption fine structure (XAFS) studies showed that the lower barrier energy of surface diffusion on the {110} facet was critical for the superior electrochemical property, favoring ion diffusion on the electrode and thus leading to the enhanced electrochemical performance. Through the combination of experiments and theoretical calculations, a reliable interpretation of the mechanism for the electroanalysis of HMIs with nanomaterials exposing different crystal facets has been provided. Furthermore, this provides deep insight into the key factor for improving electrochemical performance in HMI detection, informing the design of high-performance electrochemical sensors.

  12. The Design of a Quantitative Western Blot Experiment

    PubMed Central

    Taylor, Sean C.; Posch, Anton

    2014-01-01

    Western blotting is a technique that has been in practice for more than three decades that began as a means of detecting a protein target in a complex sample. Although there have been significant advances in both the imaging and reagent technologies to improve sensitivity, dynamic range of detection, and the applicability of multiplexed target detection, the basic technique has remained essentially unchanged. In the past, western blotting was used simply to detect a specific target protein in a complex mixture, but now journal editors and reviewers are requesting the quantitative interpretation of western blot data in terms of fold changes in protein expression between samples. The calculations are based on the differential densitometry of the associated chemiluminescent and/or fluorescent signals from the blots and this now requires a fundamental shift in the experimental methodology, acquisition, and interpretation of the data. We have recently published an updated approach to produce quantitative densitometric data from western blots (Taylor et al., 2013) and here we summarize the complete western blot workflow with a focus on sample preparation and data analysis for quantitative western blotting. PMID:24738055
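The fold-change calculation at the heart of quantitative blotting can be sketched with hypothetical densitometry values: target-band signals are first normalized to a loading-control signal, and fold change is the ratio of normalized signals between lanes. The numbers below are invented for illustration, not from the cited workflow.

```python
# Hypothetical densitometry readings (arbitrary units) for two lanes.
control = {"target": 12000.0, "loading": 48000.0}
treated = {"target": 30000.0, "loading": 50000.0}

# Normalize each target signal to its lane's loading-control signal,
# then take the ratio of normalized signals as the fold change.
norm_control = control["target"] / control["loading"]   # 0.25
norm_treated = treated["target"] / treated["loading"]   # 0.6
fold_change = norm_treated / norm_control               # 2.4
```

Normalization is what makes the ratio robust to unequal sample loading between lanes, which is the central methodological point of the quantitative workflow.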

  13. Automated lidar-derived canopy height estimates for the Upper Mississippi River System

    USGS Publications Warehouse

    Hlavacek, Enrika

    2015-01-01

    Land cover/land use (LCU) classifications serve as important decision support products for researchers and land managers. The LCU classifications produced by the U.S. Geological Survey’s Upper Midwest Environmental Sciences Center (UMESC) include canopy height estimates that are assigned through manual aerial photography interpretation techniques. In an effort to improve upon these techniques, this project investigated the use of high-density lidar data for the Upper Mississippi River System to determine canopy height. An ArcGIS tool was developed to automatically derive height modifier information based on the extent of land cover features for forest classes. The measurement of canopy height included a calculation of the average height from lidar point cloud data as well as the inclusion of a local maximum filter to identify individual tree canopies. Results were compared to original manually interpreted height modifiers and to field survey data from U.S. Forest Service Forest Inventory and Analysis plots. This project demonstrated the effectiveness of utilizing lidar data to more efficiently assign height modifier attributes to LCU classifications produced by the UMESC.
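The two measurements described, average canopy height over a feature and individual tree tops via a local maximum filter, can be sketched on a made-up canopy height model. This is a schematic illustration, not the UMESC ArcGIS tool, and the 5x5 grid below is hypothetical.

```python
import numpy as np

# Toy canopy height model (CHM), heights in meters.
chm = np.array([
    [2, 2, 3, 2, 2],
    [2, 9, 3, 2, 2],
    [3, 3, 3, 3, 3],
    [2, 2, 3, 8, 2],
    [2, 2, 3, 2, 2],
], dtype=float)

mean_height = chm.mean()   # average height over the feature

# 3x3 local-maximum filter: a cell is flagged as a tree top if it is
# strictly taller than every other cell in its 3x3 neighbourhood.
padded = np.pad(chm, 1, constant_values=-np.inf)
tops = []
for i in range(chm.shape[0]):
    for j in range(chm.shape[1]):
        window = padded[i:i + 3, j:j + 3]
        if chm[i, j] == window.max() and (window == chm[i, j]).sum() == 1:
            tops.append((i, j))
```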

  14. Impaired or Not Impaired, That Is the Question: Navigating the Challenges Associated with Using Canadian Normative Data in a Comprehensive Test Battery That Contains American Tests

    PubMed Central

    Chevalier, Thérèse M.; Stewart, Garth; Nelson, Monty; McInerney, Robert J.; Brodie, Norman

    2016-01-01

It has been well documented that IQ scores calculated using Canadian norms are generally 2–5 points lower than those calculated using American norms on the Wechsler IQ scales. However, recent findings have demonstrated that the difference may be significantly larger for individuals with certain demographic characteristics, and this has prompted discussion about the appropriateness of using the Canadian normative system with a clinical population in Canada. This study compared the interpretive effects of applying the American and Canadian normative systems in a clinical sample. We used a multivariate analysis of variance (MANOVA) to calculate differences between IQ and Index scores in a clinical sample, and mixed-model ANOVAs to assess the pattern of differences across age and ability level. As expected, Full Scale IQ scores calculated using Canadian norms were systematically lower than those calculated using American norms, but differences were significantly larger for individuals classified as having extremely low or borderline intellectual functioning when compared with those who scored in the average range. Implications of clinically different conclusions for up to 52.8% of patients based on these discrepancies highlight a unique dilemma facing Canadian clinicians, and underscore the need for caution when choosing a normative system with which to interpret WAIS-IV results in the context of a neuropsychological test battery in Canada. Based on these findings, we offer guidelines for best practice for Canadian clinicians when interpreting data from neuropsychological test batteries that include different normative systems, and suggestions to assist with future test development. PMID:27246955

  15. Theoretical study of thorium monoxide for the electron electric dipole moment search: electronic properties of H(3)Δ(1) in ThO.

    PubMed

    Skripnikov, L V; Titov, A V

    2015-01-14

    Recently, improved limits on the electron electric dipole moment and on the dimensionless constant kT,P, characterizing the strength of the T,P-odd pseudoscalar-scalar electron-nucleus neutral-current interaction in the H(3)Δ1 state of the ThO molecule, were obtained by the ACME collaboration [J. Baron et al., Science 343, 269 (2014)]. The interpretation of the experiment in terms of these fundamental quantities is based on the results of a theoretical study of the relevant ThO characteristics, the effective electric field acting on the electron, Eeff, and a parameter of the T,P-odd pseudoscalar-scalar interaction, WT,P, given in Skripnikov et al. [J. Chem. Phys. 139, 221103 (2013)] by the St. Petersburg group. To reduce the uncertainties of the given limits, we report improved calculations of the molecular state-specific quantities Eeff, 81.5 GV/cm, and WT,P, 112 kHz, with an uncertainty within 7% of their magnitudes. Thus, the values recommended for setting the upper limits of these quantities are 75.8 GV/cm and 104 kHz, respectively. The hyperfine structure constant, the molecule-frame dipole moment of the H(3)Δ1 state, and the H(3)Δ1 → X(1)Σ(+) transition energy, which can serve as measures of the reliability of the obtained Eeff and WT,P values, are also calculated. In addition, we report the first calculation of the g-factor for the H(3)Δ1 state of ThO. The results are compared to earlier experimental and theoretical studies, and a detailed analysis of the uncertainties of the calculations is given.

  16. Full magnetic gradient tensor from triaxial aeromagnetic gradient measurements: Calculation and application

    NASA Astrophysics Data System (ADS)

    Luo, Yao; Wu, Mei-Ping; Wang, Ping; Duan, Shu-Ling; Liu, Hao-Jun; Wang, Jin-Long; An, Zhan-Feng

    2015-09-01

    The full magnetic gradient tensor (MGT) refers to the spatial rate of change of the three components of the geomagnetic field vector along three mutually orthogonal axes. The tensor is useful for geological mapping, resource exploration, magnetic navigation, and other applications. However, it is very difficult to measure the full MGT with existing engineering technology. We present a method that uses triaxial aeromagnetic gradient measurements to derive the full MGT. The method uses the triaxial gradient data and makes full use of the variation of the magnetic anomaly modulus in three dimensions to obtain a self-consistent MGT. Numerical simulations show that the full MGT data obtained with the proposed method are of high precision and satisfy the requirements of data processing. We selected triaxial aeromagnetic gradient data from Hebei Province for calculating the full MGT. Data processing shows that using triaxial gradient data makes it possible to exploit the spatial rate of change of the total field in three dimensions and suppresses part of the independent noise in the aeromagnetic gradient. The calculated tensor components have improved resolution, and the transformed full tensor gradient satisfies the requirements of geological mapping and interpretation.
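    For readers unfamiliar with the quantity involved: the MGT is the 3x3 matrix of partial derivatives G[i, j] = dB_i/dx_j. The sketch below is not from the paper; it assumes a simple point-dipole source and plain central differences, and shows how a gradient tensor can be computed numerically and sanity-checked against the source-free constraints trace(G) = 0 (div B = 0) and G = G^T (curl B = 0).

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(r, m):
    """Magnetic field of a point dipole with moment m at the origin, evaluated at r."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3

def gradient_tensor(r, m, h=1e-3):
    """Full MGT, G[i, j] = dB_i/dx_j, via central finite differences with step h."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_field(r + dr, m) - dipole_field(r - dr, m)) / (2 * h)
    return G

r = np.array([30.0, 40.0, 50.0])   # observation point, metres (illustrative)
m = np.array([0.0, 0.0, 1e6])      # dipole moment, A*m^2 (illustrative)
G = gradient_tensor(r, m)
```

    In a source-free region only five of the nine components are independent, which is why the trace and symmetry checks above are a useful self-consistency test for any derived tensor.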

  17. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  18. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
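    The idea of a numerical routine that "returns uncertainties in its calculations" can be illustrated with the simplest possible example: plain Monte Carlo integration, which reports a standard error alongside its estimate. This is only a toy analogue of the probabilistic-numerics program described above, not a method from the paper.

```python
import math
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of f on [a, b], returned together
    with its standard error -- the 'uncertainty in the calculation'."""
    rng = random.Random(seed)
    xs = [a + (b - a) * rng.random() for _ in range(n)]
    ys = [f(x) for x in xs]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    est = (b - a) * mean                     # integral estimate
    stderr = (b - a) * math.sqrt(var / n)    # its standard error
    return est, stderr

est, err = mc_integrate(math.sin, 0.0, math.pi)  # true value is 2
```

    A downstream consumer can propagate `err` instead of treating `est` as exact, which is the basic move the abstract advocates for numerical pipelines generally.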

  19. Cross-Over Between Different Symmetries

    NASA Astrophysics Data System (ADS)

    Frauendorf, S.

    2014-09-01

    The yrast states of even-even vibrational and transitional nuclei are interpreted as a rotating condensate of interacting d-bosons. The corresponding semi-classical tidal wave concept is used for microscopic calculations of energies and E2 transition probabilities. The strong octupole correlations in the light rare earth and actinide nuclides are interpreted as rotation-induced condensation of interacting f-bosons.

  20. Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.

    PubMed

    Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan

    2017-01-01

    Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of major applicable pulse sequences used in this article, with the sequences organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.
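    As a minimal illustration of the quantitative approach described above, the sketch below recovers T2 and a proton-density-like value from two spin-echo signals, assuming the standard mono-exponential decay model S(TE) = S0*exp(-TE/T2). The echo times and signal values are invented for the example, not taken from the article.

```python
import math

def t2_from_two_echoes(s1, s2, te1, te2):
    """T2 from two spin-echo signals, assuming S(TE) = S0 * exp(-TE / T2):
    T2 = (TE2 - TE1) / ln(S1 / S2)."""
    return (te2 - te1) / math.log(s1 / s2)

def pd_from_echo(s, te, t2):
    """Extrapolate a measured signal back to TE = 0 for a proton-density-weighted value."""
    return s * math.exp(te / t2)

# Synthetic tissue with T2 = 80 ms and S0 = 1000 (arbitrary units):
t2_true, s0 = 80.0, 1000.0
te1, te2 = 20.0, 100.0  # echo times in ms
s1 = s0 * math.exp(-te1 / t2_true)
s2 = s0 * math.exp(-te2 / t2_true)
t2_est = t2_from_two_echoes(s1, s2, te1, te2)
```

    Real quantitative pipelines fit many echoes rather than two, but the two-point case shows why no special hardware is needed: the parameter falls out of signals an ordinary qualitative sequence already produces.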

  1. Abnormal arterial flows by a distributed model of the fetal circulation.

    PubMed

    van den Wijngaard, Jeroen P H M; Westerhof, Berend E; Faber, Dirk J; Ramsay, Margaret M; Westerhof, Nico; van Gemert, Martin J C

    2006-11-01

    Modeling the propagation of blood pressure and flow along the fetoplacental arterial tree may improve interpretation of abnormal flow velocity waveforms in fetuses. The current models, however, either do not include a wide range of gestational ages or do not account for variation in anatomical, vascular, or rheological parameters. We developed a mathematical model of the pulsating fetoumbilical arterial circulation using Womersley's oscillatory flow theory and viscoelastic arterial wall properties. Arterial flow waves are calculated at different arterial locations from which the pulsatility index (PI) can be determined. We varied blood viscosity, placental and brain resistances, placental compliance, heart rate, stiffness of the arterial wall, and length of the umbilical arteries. The PI increases in the umbilical artery and decreases in the cerebral arteries, as a result of increasing placental resistance or decreasing brain resistance. Both changes in resistance decrease the flow through the placenta. An increased arterial stiffness increases the PIs in the entire fetoplacental circulation. Blood viscosity and peripheral bed compliance have limited influence on the flow profiles. Bradycardia and tachycardia increase and decrease the PI in all arteries, respectively. Umbilical arterial length has limited influence on the PI but affects the mean arterial pressure at the placental cord insertion. The model may improve the interpretation of arterial flow pulsations and thus may advance both the understanding of pathophysiological processes and clinical management.
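    The pulsatility index referred to above is conventionally the Gosling PI, (Vmax - Vmin)/Vmean over one cardiac cycle. A minimal sketch, using a synthetic flow-velocity waveform rather than output of the authors' model:

```python
import math

def pulsatility_index(velocities):
    """Gosling pulsatility index: (Vmax - Vmin) / Vmean over one cardiac cycle."""
    vmax, vmin = max(velocities), min(velocities)
    vmean = sum(velocities) / len(velocities)
    return (vmax - vmin) / vmean

# Synthetic flow-velocity waveform over one cardiac cycle (cm/s, illustrative):
# a constant diastolic baseline plus a half-sine systolic peak.
cycle = [30 + 40 * max(0.0, math.sin(2 * math.pi * t / 100)) for t in range(100)]
pi = pulsatility_index(cycle)
```

    The abstract's findings map directly onto this ratio: raising placental resistance lowers diastolic flow (Vmin) in the umbilical artery, which increases the numerator and hence the PI.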

  2. On The gamma-ray emission from Reticulum II and other dwarf galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, Dan; Linden, Tim

    2015-09-01

    The recent discovery of ten new dwarf galaxy candidates by the Dark Energy Survey (DES) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) could increase the Fermi Gamma-Ray Space Telescope's sensitivity to annihilating dark matter particles, potentially enabling a definitive test of the dark matter interpretation of the long-standing Galactic Center gamma-ray excess. In this paper, we compare the previous analyses of Fermi data from the directions of the new dwarf candidates (including the relatively nearby Reticulum II) and perform our own analysis, with the goal of establishing the statistical significance of any gamma-ray signal from these sources. We confirm the presence of an excess from Reticulum II, with a spectral shape that is compatible with the Galactic Center signal. The significance of this emission is greater than that observed from 99.84% of randomly chosen high-latitude blank-sky locations, corresponding to a local detection significance of 3.2σ. We caution that any dark matter interpretation of this excess must be validated through observations of additional dwarf spheroidal galaxies, and improved calculations of the relative J-factor of dwarf spheroidal galaxies. We improve upon the standard blank-sky calibration approach through the use of multi-wavelength catalogs, which allow us to avoid regions that are likely to contain unresolved gamma-ray sources.

  3. Vibrational spectroscopic studies, NLO, HOMO-LUMO and electronic structure calculations of α,α,α-trichlorotoluene using HF and DFT.

    PubMed

    Govindarajan, M; Karabacak, M; Periandy, S; Xavier, S

    2012-08-01

    FT-IR and FT-Raman spectra of α,α,α-trichlorotoluene have been recorded and analyzed. The geometry and fundamental vibrational frequencies are interpreted with the aid of structure optimizations and normal coordinate force field calculations based on the density functional theory (DFT) B3LYP/6-311++G(d,p) method, together with a comparative study at the HF level with various basis set combinations. The fundamental vibrational wavenumbers as well as their intensities were calculated, and good agreement between observed and scaled calculated wavenumbers has been achieved. The complete vibrational assignments of wavenumbers are made on the basis of the potential energy distribution (PED). The effects of the methyl group and halogen substitutions were investigated. The absorption energy and oscillator strength were calculated by time-dependent density functional theory (TD-DFT). The electric dipole moment, polarizability, and first hyperpolarizability of α,α,α-trichlorotoluene have been calculated. (1)H NMR chemical shifts were calculated using the gauge-independent atomic orbital (GIAO) method with the HF and B3LYP methods and the 6-311++G(d,p) basis set. Moreover, the molecular electrostatic potential (MEP) and thermodynamic properties were computed. Mulliken and natural charges of the title molecule were also calculated and interpreted. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. The Application of Computational Chemistry to Problems in Mass Spectrometry

    EPA Science Inventory

    Quantum chemistry is capable of calculating a wide range of electronic and thermodynamic properties of interest to a chemist or physicist. Calculations can be used both to predict the results of future experiments and to aid in the interpretation of existing results. This paper w...

  5. FINITE EXPANSION METHOD FOR THE CALCULATION AND INTERPRETATION OF MOLECULAR ELECTROSTATIC POTENTIALS

    EPA Science Inventory

    Because it is useful to have the molecular electrostatic potential as an element in a complex scheme to assess the toxicity of large molecules, efficient and reliable methods are needed for the calculation and characterization of these potentials. A multicenter multipole expansio...

  6. Prediction and interpretation of infrared intensities of polymethylene chain molecules

    NASA Astrophysics Data System (ADS)

    Jona, P.; Gussoni, M.; Zerbi, G.

    1986-03-01

    We have calculated the IR intensities of some polymethylene chain molecules containing conformational defects or polar heads. The calculations provide spectroscopic markers of end-TG, GTG', GTG, GG and GGTGG defects. Further, they allow a spectroscopic study of the interactions between polar heads and the alkyl chain.

  7. Overnight shift work: factors contributing to diagnostic discrepancies.

    PubMed

    Hanna, Tarek N; Loehfelm, Thomas; Khosa, Faisal; Rohatgi, Saurabh; Johnson, Jamlik-Omari

    2016-02-01

    The aims of the study are to identify factors contributing to preliminary interpretive discrepancies on overnight radiology resident shifts and to apply these data, in the context of the known literature, to draw parallels to attending overnight shift work schedules. Residents in one university-based training program provided preliminary interpretations of 18,488 overnight (11 pm–8 am) studies at a level 1 trauma center between July 1, 2013 and December 31, 2014. As part of their normal workflow and feedback, attendings scored the reports as major discrepancy, minor discrepancy, agree, and agree--good job. We retrospectively obtained the preliminary interpretation scores for each study. Total relative value units (RVUs) per shift were calculated as an indicator of overnight workload. The dataset was supplemented with information on trainee level, number of consecutive nights on night float, hour, modality, and per-shift RVU. The data were analyzed with proportional logistic regression and Fisher's exact test. There were 233 major discrepancies (1.26%). Trainee level (senior vs. junior residents; 1.08 vs. 1.38%; p < 0.05) and modality were significantly associated with performance. Increased workload affected more junior residents' performance, with R3 residents performing significantly worse on busier nights. Hour of the night was not significantly associated with performance, but there was a trend toward best performance at 2 am, with decreased accuracy throughout the remaining shift hours. Improved performance occurred after the first six night float shifts, presumably as residents acclimated to a night schedule. As overnight shift work schedules increase in popularity for residents and attendings, focused attention to factors affecting interpretive accuracy is warranted.

  8. Treatment of metaphor interpretation deficits subsequent to traumatic brain injury.

    PubMed

    Brownell, Hiram; Lundgren, Kristine; Cayer-Meade, Carol; Milione, Janet; Katz, Douglas I; Kearns, Kevin

    2013-01-01

    Objective: To improve oral interpretation of metaphors by patients with traumatic brain injury (TBI). Design: Both single-subject experimental design and group analysis. Setting: Patients' homes. Participants: Eight adult patients with moderate to severe traumatic brain injury sustained 3 to 20 years before testing. Intervention: The Metaphor Training Program consisted typically of 10 baseline sessions, 3 to 9 1-hour sessions of structured intervention, and 10 posttraining baseline sessions. Training used extensive practice with simple graphic displays to illustrate semantic associations. Main outcome measures: Quality of orally produced metaphor interpretation and accuracy of line orientation judgments served as dependent measures obtained during baseline, training, posttraining, and at a 3- to 4-month follow-up. Untrained line orientation judgments provided a control measure. Results: Group data showed significant improvement in metaphor interpretation but not in line orientation. Six of 8 patients individually demonstrated significant improvement in metaphor interpretation. Gains persisted for 3 of the 6 patients at the 3- to 4-month follow-up. Conclusions: The Metaphor Training Program can improve cognitive-communication performance for individuals with moderate to severe traumatic brain injury. Results support the potential for treating patients' residual cognitive-linguistic deficits.

  9. Using professional interpreters in undergraduate medical consultation skills teaching

    PubMed Central

    Bansal, Aarti; Swann, Jennifer; Smithson, William Henry

    2014-01-01

    The ability to work with interpreters is a core skill for UK medical graduates. At the University of Sheffield Medical School, this teaching was identified as a gap in the curriculum. Teaching was developed to use professional interpreters in role-play, based on evidence that professional interpreters improve health outcomes for patients with limited English proficiency. Other principles guiding the development of the teaching were an experiential learning format, integration to the core consultation skills curriculum, and sustainable delivery. The session was aligned with existing consultation skills teaching to retain the small-group experiential format and general practitioner (GP) tutor. Core curricular time was found through conversion of an existing consultation skills session. Language pairs of professional interpreters worked with each small group, with one playing patient and the other playing interpreter. These professional interpreters attended training in the scenarios so that they could learn to act as patient and family interpreter. GP tutors attended training sessions to help them facilitate the session. This enhanced the sustainability of the session by providing a cohort of tutors able to pass on their expertise to new staff through the existing shadowing process. Tutors felt that the involvement of professional interpreters improved student engagement. Student evaluation of the teaching suggests that the learning objectives were achieved. Faculty evaluation by GP tutors suggests that they perceived the teaching to be worthwhile and that the training they received had helped improve their own clinical practice in consulting through interpreters. 
We offer the following recommendations to others who may be interested in developing teaching on interpreted consultations within their core curriculum: 1) consider recruiting professional interpreters as a teaching resource; 2) align the teaching to existing consultation skills sessions to aid integration; and 3) invest in faculty development for successful and sustainable delivery. PMID:25473325

  10. Effective use of interpreters by family nurse practitioner students: is didactic curriculum enough?

    PubMed

    Phillips, Susanne J; Lie, Desiree; Encinas, Jennifer; Ahearn, Carol Sue; Tiso, Susan

    2011-05-01

    Nurse practitioners (NPs) care for patients with limited English proficiency (LEP). However, NP education for improving communication in interpreted encounters is not well reported. We report a single school study using standardized encounters within a clinical practice examination (CPX) to assess the adequacy of current curriculum. Entering family NP (FNP) students (n=26) participated in a baseline CPX case. They were assessed by standardized patients using the validated Interpreter Impact Rating Scale (IIRS) and Physician-Patient Interaction (PPI) scale, and by interpreters using the Interpreter Scale (IS). The case was re-administered to 31 graduating students following completion of existing curriculum. Primary outcome was aggregate change in skills comprising global IIRS, PPI and IS scores. Pre- and post-performance data were available for one class of 10 students. Secondary outcome was change in skill scores for this class. Mean aggregate global scores showed no significant improvement between scores at entry and graduation. For 10 students with pre- and post-performance data, there was no improvement in skill scores for any measure. Skill assessed on one measure worsened. FNP students show no improvement in skills in working with interpreters with the current curriculum. An enhanced curriculum is needed. ©2011 The Author(s) Journal compilation ©2011 American Academy of Nurse Practitioners.

  11. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature, corrupts both the amplitude and phase images, complicates data interpretation, degrades segmentation performance, and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on estimation of the minimum mean square error (MMSE); the improved Sigma filter with detection of strong scatterers, based on calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, a combination of two complementary filters, the refined Lee filter and the SWT, in which one filter can boost the results of the other. The originality of our work is based on the application of these methods to several types of images (amplitude, intensity, and complex, from satellite and airborne radar) and on the optimization of wavelet filtering by adding a parameter to the calculation of the threshold. This parameter controls the filtering effect and achieves a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges, and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
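    For readers unfamiliar with the Lee filter mentioned above, the following is a minimal sketch of the basic (unrefined) Lee MMSE filter for multiplicative speckle. The window size, test scene, and single-look exponential noise model are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lee_filter(img, win=7, looks=1):
    """Basic Lee MMSE speckle filter for multiplicative noise.

    x_hat = mu + k * (y - mu), with k = max(0, 1 - Cu^2 / Cy^2),
    where Cu^2 = 1/looks is the speckle coefficient of variation (squared)
    and Cy is the local coefficient of variation in the window.
    """
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    cu2 = 1.0 / looks
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mu = w.mean()
            var = w.var()
            cy2 = var / (mu * mu) if mu != 0 else 0.0
            k = 0.0 if cy2 == 0 else max(0.0, 1.0 - cu2 / cy2)
            out[i, j] = mu + k * (img[i, j] - mu)
    return out

rng = np.random.default_rng(0)
clean = np.ones((32, 32))
clean[:, 16:] = 4.0                                   # two-region test scene
speckled = clean * rng.gamma(1.0, 1.0, size=clean.shape)  # 1-look speckle
filtered = lee_filter(speckled, win=7, looks=1)
```

    In homogeneous areas Cy^2 approaches Cu^2, so k goes to zero and the output tends to the local mean (strong smoothing); near edges and point scatterers Cy^2 is large, k approaches one, and the observed pixel is retained, which is the edge-preserving behavior the refined variants build on.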

  12. Passive Super-Low Frequency electromagnetic prospecting technique

    NASA Astrophysics Data System (ADS)

    Wang, Nan; Zhao, Shanshan; Hui, Jian; Qin, Qiming

    2017-03-01

    The Super-Low Frequency (SLF) electromagnetic prospecting technique, adopted as a non-imaging remote sensing tool for depth sounding, is systematically proposed for subsurface geological survey. In this paper, we first propose and theoretically illustrate natural-source magnetic amplitudes as SLF responses. In order to directly calculate multi-dimensional theoretical SLF responses, modeling algorithms were developed and evaluated using the finite difference method. The theoretical results of three-dimensional (3-D) models show that the average normalized SLF magnetic amplitude responses were numerically stable and appropriate for practical interpretation. To explore the depth resolution, three-layer models were configured. The modeling results prove that the SLF technique is more sensitive to conductive objective layers than to highly resistive ones, with the SLF responses of conductive objective layers showing clearly rising amplitudes in the low-frequency range. Afterwards, we proposed an improved Frequency-Depth transformation based on Bostick inversion to realize depth sounding by empirically adjusting two parameters. The SLF technique has already been successfully applied in geothermal exploration and coalbed methane (CBM) reservoir interpretation, which demonstrates that the proposed methodology is effective in revealing low-resistivity distributions. Furthermore, it significantly contributes to reservoir identification through electromagnetic radiation anomaly extraction. Meanwhile, the SLF interpretation results are in accordance with the dynamic production status of CBM reservoirs, which means the technique could provide an economical, convenient, and promising method for exploring and monitoring subsurface geo-objects.
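    The Bostick inversion underlying the Frequency-Depth transformation can be sketched in its textbook form (without the paper's two empirical adjustment parameters, which are not specified in the abstract): each frequency is mapped to a depth via the apparent resistivity, and the resistivity at that depth is corrected by the log-log slope of the sounding curve.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def bostick_transform(freqs, rho_a):
    """Textbook Bostick frequency-to-depth transform.

    depth(f) = sqrt(rho_a / (2*pi*f*mu0))
    rho_B(f) = rho_a * (1 + m) / (1 - m),  m = -dlog(rho_a)/dlog(f)
    """
    depths, rhos = [], []
    n = len(freqs)
    for i in range(n):
        # centered log-log slope (one-sided at the ends)
        lo, hi = max(0, i - 1), min(n - 1, i + 1)
        m = -(math.log(rho_a[hi]) - math.log(rho_a[lo])) / (
            math.log(freqs[hi]) - math.log(freqs[lo]))
        depths.append(math.sqrt(rho_a[i] / (2 * math.pi * freqs[i] * MU0)))
        rhos.append(rho_a[i] * (1 + m) / (1 - m))
    return depths, rhos

freqs = [10.0, 3.0, 1.0, 0.3, 0.1]   # Hz, high to low (illustrative)
rho_a = [100.0] * len(freqs)         # uniform half-space, ohm-m
depths, rho_b = bostick_transform(freqs, rho_a)
```

    For the uniform half-space above, the slope m is zero everywhere, so the transform returns the half-space resistivity at every depth, and lower frequencies map to greater depths, which is the sense in which the technique sounds deeper as frequency decreases.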

  13. Tetraphenylporphyrin electronic properties: a combined theoretical and experimental study of thin films deposited by SuMBD.

    PubMed

    Nardi, Marco; Verucchi, Roberto; Corradi, Claudio; Pola, Marco; Casarin, Maurizio; Vittadini, Andrea; Iannotta, Salvatore

    2010-01-28

    Porphyrins and their metal complexes are particularly well suited for applications in photoelectronics, sensing, and energy production because of their chemical, electronic, and optical properties. Understanding the electronic properties of the pristine molecule is of great relevance for the study and application of this wide class of compounds. This is notably important for the recently achieved in-vacuo synthesis of organo-metallic thin films directly from the pure free-base organic-inorganic precursors in the vapor phase, and for its interpretation by means of surface electron spectroscopies. We report on a combined experimental and theoretical study of the physical/chemical properties of tetraphenylporphyrin, H(2)TPP, deposited on the SiO(2)/Si(100) native oxide surface by supersonic molecular beam deposition (SuMBD). Valence states and the 1s core-level emissions of carbon and nitrogen have been investigated with surface photoelectron spectroscopies using synchrotron radiation. The interpretation of the spectra has been guided by density functional numerical experiments on the gas-phase molecule. Non-relativistic calculations were carried out for the valence states, whereas a two-component relativistic approach in the zeroth-order regular approximation was used to investigate the core levels. The good agreement between theoretical and experimental analysis results in a comprehensive overview of the chemical properties of the H(2)TPP molecule, greatly improving the reliability of the interpretation of experimental photoemission spectra.

  14. Calculations of Wall Effects on Propeller Noise

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Eversman, Walter

    1987-01-01

    Reverberations affect sound levels in wind tunnels. Report describes calculations of acoustic field of propeller in wind tunnel having walls of various degrees of softness. Understanding provided by this and related studies necessary for correct interpretation of wind-tunnel measurements of noise generated by high speed, highly loaded, multiple-blade turbopropellers.

  15. Chi-Square Statistics, Tests of Hypothesis and Technology.

    ERIC Educational Resources Information Center

    Rochowicz, John A.

    The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
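    As a concrete instance of the calculation the paper discusses: a chi-square statistic and its p-value can be computed directly, here for the 1-degree-of-freedom case, where the p-value has the closed form erfc(sqrt(chi2/2)). The coin-flip data are invented for illustration.

```python
import math

def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def p_value_df1(chi2):
    """Exact p-value for 1 degree of freedom: P(X > chi2) = erfc(sqrt(chi2 / 2))."""
    return math.erfc(math.sqrt(chi2 / 2.0))

# A fair-coin hypothesis test: 60 heads and 40 tails in 100 flips.
chi2 = chi_square_stat([60, 40], [50, 50])  # = 4.0
p = p_value_df1(chi2)                        # about 0.046, so reject at alpha = 0.05
```

    The familiar critical value checks out: p_value_df1(3.841) returns approximately 0.05, matching standard chi-square tables.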

  16. Stream gradient Hotspot and Cluster Analysis (SL-HCA) for improving the longitudinal profiles metrics

    NASA Astrophysics Data System (ADS)

    Troiani, Francesco; Piacentini, Daniela; Della Seta, Marta

    2016-04-01

    Many studies have successfully focused on stream longitudinal profile analysis through the Stream Length-gradient (SL) index for detecting, at different spatial scales, either tectonic structures or hillslope processes. The analysis and interpretation of the spatial variability of SL values, at both regional and local scales, is often complicated by the concomitance of different factors generating SL anomalies, including bedrock composition. The creation of lithologically-filtered SL maps is often problematic in areas where homogeneously surveyed geological maps with sufficient resolution are unavailable. Moreover, both the classification of SL maps and unbiased anomaly detection are rather difficult. For instance, what is the best threshold for defining anomalous SL values? Further, is there a minimum along-channel extent of anomalous SL values for objectively defining over-steepened segments on long-profiles? This research investigates the relevance and potential of a new approach based on Hotspot and Cluster Analysis of SL values (SL-HCA) for detecting knickzones on long-profiles at a regional scale and for fine-tuning the interpretation of their geological-geomorphological meaning. We developed this procedure within a 2800 km2 area located in the mountainous sector of the Northern Apennines of Italy. The Getis-Ord Gi* statistic is applied for the SL-HCA approach. The SL value, calculated starting from a 5x5 m Digital Elevation Model, is used as the weighting factor, and the Gi* index is calculated for each 50 m-long channel segment of the whole fluvial system. The outcomes indicate that high positive Gi* values imply the clustering of SL anomalies, and thus the occurrence of knickzones on the stream long-profiles. Results show that high and very high Gi* values (i.e. values beyond two standard deviations from the mean) correlate well with the principal knickzones detected with existing lithologically-filtered SL maps.
Field checks and remote sensing analysis conducted on 52 clusters of high and very high Gi* values indicate that mass movement of slope material is the dominant process producing over-steepened long-profiles along connected streams, whereas litho-structure accounts for the main anomalies along disconnected streams. Tectonic structures generally produce the largest clusters. Our results demonstrate that SL-HCA maps have the same potential as lithologically-filtered SL maps for detecting knickzones due to hillslope processes and/or tectonic structures. The reduced-complexity model derived from the SL-HCA approach greatly improves the readability of the morphometric outcomes, and thus the interpretation at a regional scale of the geological-geomorphological meaning of over-steepened segments on long-profiles. SL-HCA maps are useful for investigating and better interpreting knickzones within regions poorly covered by geological data and where field surveys are difficult to perform.
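As a rough illustration of the statistic used in this record, the Getis-Ord Gi* index can be sketched for a one-dimensional chain of channel segments. The toy SL values and the simple binary, immediate-neighbour weights below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def getis_ord_gi_star(x, weights):
    """Getis-Ord Gi* for each segment.

    x       : array of SL values, one per channel segment
    weights : (n x n) spatial weight matrix including self (w_ii = 1)
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)          # population sd
    w_sum = weights.sum(axis=1)                        # sum of weights per i
    w_sq_sum = (weights ** 2).sum(axis=1)
    num = weights @ x - xbar * w_sum
    den = s * np.sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
    return num / den

# toy long-profile: 9 segments with a cluster of high SL in the middle
sl = np.array([1.0, 1.0, 1.0, 5.0, 6.0, 5.0, 1.0, 1.0, 1.0])
n = sl.size
# binary weights: each segment plus its immediate up-/downstream neighbours
w = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        w[i, j] = 1.0

gi = getis_ord_gi_star(sl, w)
```

In this toy profile only the central segment of the high-SL cluster exceeds two standard deviations (Gi* ≈ 2.8), the "high and very high" anomaly criterion the abstract applies.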

  17. Quantum chemistry and excited states: First investigations on pyrene-like molecules

    NASA Technical Reports Server (NTRS)

    Parisel, Olivier; Ellinger, Y.

    1994-01-01

    Although the calculations are expected to be accurate within 10%, it follows that no unquestionable one-to-one attribution can be proposed, given the density of the DIBs (Diffuse Interstellar Bands). Nevertheless, it has been shown that if one is interested in the experimental study of methyl-pyrene cations, for example, then the most promising candidate is the 1-methyl isomer: this isomer has been investigated by d'Hendecourt and Leger (1993, 1994), and their spectrum shows very striking features in very good agreement with both our calculations and a few DIBs. However, the 1-methyl-pyrene cation is not the only product susceptible of being formed in this experiment, and further investigations are in progress to give a complete interpretation of the results. This preliminary report on pyrene-like molecules illustrates the role that theoretical calculations can play in both the design and the interpretation of experiments.

  18. Charting a New Course. NAI National Interpreters Workshop. Proceedings of the Annual Conference (San Diego, California, October 24-28, 1988).

    ERIC Educational Resources Information Center

    Erikson, Debra M., Ed.

    Selected presentations in this publication include: "AAPRCO, Amtrak, the Railroads, and Interpretation"; "Check It Out!"; "Wildlife Rehabilitation as an Interpretive Tool"; "Improving the Monorail Tour"; "Interpreting Our Heretics"; "Evaluation: A Critical Management Process"; "A…

  19. Accuracy of ECG interpretation in competitive athletes: the impact of using standardised ECG criteria.

    PubMed

    Drezner, Jonathan A; Asif, Irfan M; Owens, David S; Prutkin, Jordan M; Salerno, Jack C; Fean, Robyn; Rao, Ashwin L; Stout, Karen; Harmon, Kimberly G

    2012-04-01

    Interpretation of ECGs in athletes is complicated by physiological changes related to training. The purpose of this study was to determine the accuracy of ECG interpretation in athletes among different physician specialties, with and without use of a standardised ECG criteria tool. Physicians were asked to interpret 40 ECGs (28 normal ECGs from college athletes randomised with 12 abnormal ECGs from individuals with known cardiovascular pathology) and classify each ECG as (1) 'normal or variant--no further evaluation and testing needed' or (2) 'abnormal--further evaluation and testing needed.' After reading the ECGs, participants received a two-page ECG criteria tool to guide interpretation of the ECGs again. A total of 60 physicians participated: 22 primary care (PC) residents, 16 PC attending physicians, 12 sports medicine (SM) physicians and 10 cardiologists. At baseline, the total number of ECGs correctly interpreted was PC residents 73%, PC attendings 73%, SM physicians 78% and cardiologists 85%. With use of the ECG criteria tool, all physician groups significantly improved their accuracy (p<0.0001): PC residents 92%, PC attendings 90%, SM physicians 91% and cardiologists 96%. With use of the ECG criteria tool, specificity improved from 70% to 91%, sensitivity improved from 89% to 94% and there was no difference comparing cardiologists versus all other physicians (p=0.053). Providing standardised criteria to assist ECG interpretation in athletes significantly improves the ability to accurately distinguish normal from abnormal findings across physician specialties, even in physicians with little or no experience.

  20. Learning from samples of one or fewer*

    PubMed Central

    March, J; Sproull, L; Tamuz, M

    2003-01-01

    

 Organizations learn from experience. Sometimes, however, history is not generous with experience. We explore how organizations convert infrequent events into interpretations of history, and how they balance the need to achieve agreement on interpretations with the need to interpret history correctly. We ask what methods are used, what problems are involved, and what improvements might be made. Although the methods we observe are not guaranteed to lead to consistent agreement on interpretations, valid knowledge, improved organizational performance, or organizational survival, they provide insight into the possibilities for and problems of learning from fragments of history. PMID:14645764

  1. Second-Order Factor Analysis as a Validity Assessment Tool: A Case Study Example Involving Perceptions of Stereotypic Love.

    ERIC Educational Resources Information Center

    Borrello, Gloria M.; Thompson, Bruce

    The calculation of second-order results in the validity assessment of measures and some useful interpretation aids are presented. First-order and second-order results give different and informative pictures of data dynamics. Several aspects of good practice in interpretation of second-order results are presented using data from 487 subjects…

  2. How to generate and interpret fire characteristics charts for surface and crown fire behavior

    Treesearch

    Patricia L. Andrews; Faith Ann Heinsch; Luke Schelvan

    2011-01-01

    A fire characteristics chart is a graph that presents primary related fire behavior characteristics: rate of spread, flame length, fireline intensity, and heat per unit area. It helps communicate and interpret modeled or observed fire behavior. The Fire Characteristics Chart computer program plots either observed fire behavior or values that have been calculated by...

  3. New standards for reducing gravity data: The North American gravity database

    USGS Publications Warehouse

    Hinze, W. J.; Aiken, C.; Brozena, J.; Coakley, B.; Dater, D.; Flanagan, G.; Forsberg, R.; Hildenbrand, T.; Keller, Gordon R.; Kellogg, J.; Kucks, R.; Li, X.; Mainville, A.; Morin, R.; Pilkington, M.; Plouff, D.; Ravat, D.; Roman, D.; Urrutia-Fucugauchi, J.; Veronneau, M.; Webring, M.; Winester, D.

    2005-01-01

    The North American gravity database as well as databases from Canada, Mexico, and the United States are being revised to improve their coverage, versatility, and accuracy. An important part of this effort is revising procedures for calculating gravity anomalies, taking into account our enhanced computational power, improved terrain databases and datums, and increased interest in more accurately defining long-wavelength anomaly components. Users of the databases may note minor differences between previous and revised database values as a result of these procedures. Generally, the differences do not impact the interpretation of local anomalies but do improve regional anomaly studies. The most striking revision is the use of the internationally accepted terrestrial ellipsoid for the height datum of gravity stations rather than the conventionally used geoid or sea level. Principal facts of gravity observations and anomalies based on both revised and previous procedures together with germane metadata will be available on an interactive Web-based data system as well as from national agencies and data centers. The use of the revised procedures is encouraged for gravity data reduction because of the widespread use of the global positioning system in gravity fieldwork and the need for increased accuracy and precision of anomalies and consistency with North American and national databases. Anomalies based on the revised standards should be preceded by the adjective "ellipsoidal" to differentiate anomalies calculated using heights with respect to the ellipsoid from those based on conventional elevations referenced to the geoid. © 2005 Society of Exploration Geophysicists. All rights reserved.
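For readers unfamiliar with the reduction chain being revised, the familiar free-air and simple Bouguer corrections can be sketched as below. The constants are the standard textbook values; this does not reproduce the revised, ellipsoid-referenced procedures themselves, only the form of the corrections to which a height datum (geoid or ellipsoid) is supplied:

```python
def free_air_correction_mgal(h_m):
    """Free-air correction in mGal for station height h in metres
    (standard gradient 0.3086 mGal/m)."""
    return 0.3086 * h_m

def bouguer_slab_mgal(h_m, rho=2.67):
    """Infinite-slab Bouguer correction in mGal; rho in g/cm^3,
    h in metres (2*pi*G*rho ~ 0.04193*rho mGal/m)."""
    return 0.04193 * rho * h_m

h = 100.0                              # height above the chosen datum
fa = free_air_correction_mgal(h)       # 30.86 mGal
bs = bouguer_slab_mgal(h)              # ~11.2 mGal for rho = 2.67
```

Whether h is referenced to the geoid or to the ellipsoid is exactly the choice the revised standards change, hence the "ellipsoidal" qualifier the abstract recommends.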

  4. Ab initio calculation of 1H, 17O, 27Al and 29Si NMR parameters, vibrational frequencies and bonding energetics in hydrous silica and Na-aluminosilicate glasses

    NASA Astrophysics Data System (ADS)

    Kubicki, J. D.; Sykes, D. G.

    2004-10-01

    Ab initio, molecular orbital (MO) calculations were performed on model systems of SiO2, NaAlSi3O8 (albite), H2O-SiO2 and H2O-NaAlSi3O8 glasses. Model nuclear magnetic resonance (NMR) isotropic chemical shifts (δiso) for 1H, 17O, 27Al and 29Si are consistent with experimental data for the SiO2, NaAlSi3O8 and H2O-SiO2 systems, where structural interpretations of the NMR peak assignments are accepted. For H2O-NaAlSi3O8 glass, controversy has surrounded the interpretation of NMR and infrared (IR) spectra. Calculated δiso 1H, δiso 17O, δiso 27Al and δiso 29Si are consistent with the interpretation of Kohn et al. (1992) that Si-(OH)-Al linkages are responsible for the observed peaks in hydrous Na-aluminosilicate glasses. In addition, a theoretical vibrational frequency associated with the Kohn et al. (1992) model agrees well with the observed shoulder near 900 cm-1 in the IR and Raman spectra of hydrous albite glasses. MO calculations suggest that breaking this Si-(OH)-Al linkage requires ~+56 to +82 kJ/mol, which is comparable to the activation energies for viscous flow in hydrous aluminosilicate melts.

  5. Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy

    NASA Astrophysics Data System (ADS)

    Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline

    2016-12-01

    Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants’ eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains, constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results showed a statistically significant (α = 0.05) difference in 5 of the 11 stimuli, demonstrating that the gaze shift between the ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
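The analysis pipeline described in this record can be sketched as follows. The gaze sequences are invented, and the participant-level permutation used in the study is approximated here by shuffling pooled observations; treat it as a structural sketch only:

```python
import numpy as np

rng = np.random.default_rng(0)

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two flattened transition matrices."""
    p = p.ravel() / p.sum()
    q = q.ravel() / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def transition_matrix(seq, n_states):
    """Count matrix of gaze shifts between ECG leads (states)."""
    t = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        t[a, b] += 1
    return t

# hypothetical gaze sequences over 3 leads for the two groups
correct = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0]
incorrect = [0, 0, 0, 1, 1, 1, 2, 2, 2, 0]
observed = js_distance(transition_matrix(correct, 3),
                       transition_matrix(incorrect, 3))

# permutation test: reassign observations to groups at random
pooled = correct + incorrect
count = 0
n_perm = 2000
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    d = js_distance(transition_matrix(perm[:10], 3),
                    transition_matrix(perm[10:], 3))
    if d >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)   # add-one permutation p-value
```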

  6. Interpreter services in emergency medicine.

    PubMed

    Chan, Yu-Feng; Alagappan, Kumar; Rella, Joseph; Bentley, Suzanne; Soto-Greene, Marie; Martin, Marcus

    2010-02-01

    Emergency physicians are routinely confronted with problems associated with language barriers. It is important for emergency health care providers and the health system to strive for cultural competency when communicating with members of an increasingly diverse society. Possible solutions that can be implemented include appropriate staffing, use of new technology, and efforts to develop new kinds of ties to the community served. Linguistically specific solutions include professional interpretation, telephone interpretation, the use of multilingual staff members, the use of ad hoc interpreters, and, more recently, the use of mobile computer technology at the bedside. Each of these methods carries a specific set of advantages and disadvantages. Although professionally trained medical interpreters offer improved communication, improved patient satisfaction, and overall cost savings, they are often underutilized due to their perceived inefficiency and the inconclusive results of their effect on patient care outcomes. Ultimately, the best solution for each emergency department will vary depending on the population served and available resources. Access to the multiple interpretation options outlined above and solid support and commitment from hospital institutions are necessary to provide proper and culturally competent care for patients. Appropriate communications inclusive of interpreter services are essential for culturally and linguistically competent provider/health systems and overall improved patient care and satisfaction. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  7. Understanding Reflectance Anisotropy: Surface-structure signatures and bulk-related features

    NASA Astrophysics Data System (ADS)

    Gero Schmidt, W.

    2000-03-01

    Reflectance anisotropy spectroscopy (RAS) is becoming an increasingly important tool for in situ control of semiconductor processing with real-time feedback. The understanding and interpretation of the measured spectra, however, has been hampered by relatively slow theoretical progress. Using a massively parallel real-space multigrid technique [1] and ab initio pseudopotentials we calculated the optical spectra of a variety of III-V(001) growth structures and stepped Si(111):H surfaces. Our results agree well with experiment, notably with respect to the stoichiometric changes induced by different surface preparations. We identify two distinct sources for the optical anisotropy: (i) highly structure-dependent features are caused by transitions involving electronic surface states, and (ii) derivative-like oscillations or peaks at the bulk critical point energies arise from transitions between surface-modified bulk wave functions. The latter are nearly independent of the actual surface structure. The agreement between the calculated and measured spectra is further improved by applying quasi-particle corrections obtained from numerically efficient, simplified GW calculations [2]. The combination of converged first-principles calculations with an approximate treatment of many-particle effects allows the reliable identification of ``surface-structure fingerprints'' in the optical spectra, paving the way for the exploitation of their rich technological potential. [1] EL Briggs, DJ Sullivan, J Bernholc, Phys. Rev. B 54, 14362 (1996). [2] F Bechstedt, R Del Sole, G Cappellini, L Reining, Solid State Commun. 84, 765 (1992).

  8. Analytical and experimental investigation of fatigue in a sheet specimen with an interference-fit bolt

    NASA Technical Reports Server (NTRS)

    Crews, J. H., Jr.

    1975-01-01

    A fatigue analysis, based on finite-element calculations and fatigue tests, was conducted for an aluminum-alloy sheet specimen with a steel interference-fit bolt. The stress analysis of the region near the bolt hole showed that the beneficial effect of an interference-fit bolt can be interpreted as the combined result of two effects: (1) load transfer through the bolt and (2) the compressive interference stresses in the sheet. Results of the fatigue tests show that progressively higher interference levels produced longer fatigue lives. The tests also show that a high level of interference prevents fretting at the bolt-sheet interface and that interferences larger than this level produced little additional improvement in fatigue life.

  9. Interpreting Abstract Interpretations in Membership Equational Logic

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Rosu, Grigore

    2001-01-01

    We present a logical framework in which abstract interpretations can be naturally specified and then verified. Our approach is based on membership equational logic, which extends equational logics by membership axioms, asserting that a term has a certain sort. We represent an abstract interpretation as a membership equational logic specification, usually as an overloaded order-sorted signature with membership axioms. It turns out that, for any term, its least sort over this specification corresponds to its most concrete abstract value. Maude implements membership equational logic and provides mechanisms to calculate the least sort of a term efficiently. We first show how Maude can be used to get prototyping of abstract interpretations "for free." Building on the meta-logic facilities of Maude, we further develop a tool that automatically checks an abstract interpretation against a set of user-defined properties. This can be used to select an appropriate abstract interpretation, to characterize the specified loss of information during abstraction, and to compare different abstractions with each other.

  10. Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey

    USGS Publications Warehouse

    Parsons, T.

    2004-01-01

    New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M≥7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of a M ≥ 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
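The Poisson part of this calculation has the standard form P = 1 − exp(−t/τ) for a window of t years and mean recurrence τ. The recurrence interval below is chosen for illustration only (it happens to reproduce the quoted 21% Istanbul figure) and is not the paper's input:

```python
import math

def poisson_prob(mean_recurrence_yr, window_yr):
    """P(at least one event in the window) under a Poisson model."""
    return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

# an illustrative ~127-year mean recurrence gives a 30-year
# probability of ~21%, matching the quoted Istanbul Poisson value
p = poisson_prob(127.0, 30.0)
```

Time-dependent (renewal) and stress-transfer variants replace the constant rate with a hazard that depends on elapsed time and stress history, which is what raises the Istanbul estimate to 41 ± 14% in the study.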

  11. The Effectiveness of Using Interactive Multimedia in Improving the Concept of Fashion Design and Its Application in The Making of Digital Fashion Design

    NASA Astrophysics Data System (ADS)

    Wiana, W.

    2018-02-01

    This research concerns the effort to design a more representative learning system to improve learning outcomes in digital fashion design, through the development of interactive multimedia based on motion graphics. The research aims to determine the effect of applying motion-graphic-based interactive multimedia on students' mastery of concepts and their skill in making fashion designs in digital format. The research method used is a quasi-experiment with a Non-equivalent Control Group Design. The lectures were conducted in two different classes: class A as the experimental class and class B as the control class. The calculated results, interpreted using Normalized Gain, show a larger improvement in learning outcomes for students taught with motion-graphic-based interactive multimedia than for students taught conventionally. In this research, motion-graphic-based interactive multimedia learning was effective in improving students' concept mastery and their making of fashion designs in digital format.
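The Normalized Gain used to interpret these results is commonly computed as Hake's <g> = (post − pre) / (max − pre); the scores below are hypothetical:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

# hypothetical class-average scores (pre-test, post-test)
g_experimental = normalized_gain(40.0, 82.0)   # 0.70
g_control = normalized_gain(40.0, 58.0)        # 0.30
```

Under the usual Hake categories, g ≥ 0.7 counts as high gain and 0.3 ≤ g < 0.7 as medium gain, which is how such experimental-versus-control comparisons are typically reported.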

  12. Long-term effects of transference interpretation in dynamic psychotherapy of personality disorders.

    PubMed

    Høglend, P; Dahl, H-S; Hersoug, A G; Lorentzen, S; Perry, J C

    2011-10-01

    Only a few treatment studies of personality disorder (PD) patients address longer-term psychotherapy; general outcome measures are typically used, and follow-up periods are usually short. More studies of long-term therapies, using outcome measures of core psychopathology, are needed. This study is a dismantling randomized controlled clinical trial, specifically designed to study long-term effects of transference interpretation. Forty-six patients with mainly cluster C personality disorders were randomly assigned to 1 year of dynamic psychotherapy with or without transference interpretations. The outcome measures were remission from PD, improvement in interpersonal functioning, and use of mental health resources in the 3-year period after treatment termination. After therapy with transference interpretation, PD patients improved significantly more in core psychopathology and interpersonal functioning, the drop-out rate was reduced to zero, and use of health services was reduced to 50%, compared to therapy without this ingredient. Three years after treatment termination, 73% no longer met diagnostic criteria for any PD in the transference group, compared to 44% in the comparison group. PD patients with co-morbid disorders improved in both treatment arms in this study. However, transference interpretation improved outcome substantially more. Long-term psychotherapy that includes transference interpretation is an effective treatment for cluster C personality disorders and milder cluster B personality disorders. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  13. Guide for Calculating and Interpreting Effect Sizes and Confidence Intervals in Intellectual and Developmental Disability Research Studies

    ERIC Educational Resources Information Center

    Dunst, Carl J.; Hamby, Deborah W.

    2012-01-01

    This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…
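A minimal sketch of the kind of effect-size calculation the paper describes, using Cohen's d with a common large-sample approximation for the confidence interval; the group statistics below are hypothetical:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI from the large-sample standard error of d."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# hypothetical intervention vs. comparison group (mean, sd, n)
d = cohens_d(105.0, 15.0, 30, 100.0, 15.0, 30)   # d = 0.33
lo, hi = d_confidence_interval(d, 30, 30)
```

Here the CI spans zero, so NHST would call the result non-significant even though d ≈ 0.33 is a non-trivial effect; that divergence between NHST and effect-size reporting is exactly the kind of conflicting outcome the paper illustrates.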

  14. On the statistical significance of excess events: Remarks of caution and the need for a standard method of calculation

    NASA Technical Reports Server (NTRS)

    Staubert, R.

    1985-01-01

    Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
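The abstract does not spell out which simple formula is recommended; one common conservative estimate for on/off counting experiments (cf. Li & Ma 1983, Eq. 5) can be sketched as:

```python
import math

def simple_significance(n_on, n_off, alpha=1.0):
    """Conservative significance of an excess in an on/off measurement.

    alpha is the on/off exposure ratio; this is the simple estimate
    S = (N_on - alpha*N_off) / sqrt(N_on + alpha^2 * N_off),
    treated here as one common conservative choice, not necessarily
    the formula the report itself recommends.
    """
    excess = n_on - alpha * n_off
    return excess / math.sqrt(n_on + alpha**2 * n_off)

s = simple_significance(130, 100)   # ~1.98 sigma: no claimed detection
```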

  15. Role of Molecular Structure on X-ray Diffraction in Thermotropic Uniaxial and Biaxial Nematic Liquid Crystal Phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Bharat R.; Kang, Shin-Woong; Prasad, Veena

    2009-08-27

    X-ray diffraction is one of the most definitive methods to determine the structure of condensed matter phases, and it has been applied to unequivocally infer the structures of conventional calamitic and lyotropic liquid crystals. With the advent of bent-core and tetrapodic mesogens and the discovery of the biaxial nematic phase in them, the experimental results require more careful interpretation and analysis. Here, we present ab-initio calculations of X-ray diffraction patterns in the isotropic, uniaxial nematic, and biaxial nematic phases of bent-core mesogens. A simple Maier-Saupe-like molecular distribution function is employed to describe both aligned and unaligned mesophases. The distribution function is decomposed into two, polar and azimuthal, distribution functions to calculate the effect of the evolution of uniaxial and biaxial nematic orientational order. The calculations provide satisfactory semiquantitative interpretations of experimental results. The calculations presented here should provide a pathway to more refined and quantitative analysis of X-ray diffraction data from the biaxial nematic phase.

  16. Role of Molecular Structure on X-ray Diffraction in Uniaxial and Biaxial Phases of Thermotropic Liquid Crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Bharat R.; Kang, Shin-Woong; Prasad, Veena

    2009-04-29

    X-ray diffraction is one of the most definitive methods to determine the structure of condensed matter phases, and it has been applied to unequivocally infer the structures of conventional calamitic and lyotropic liquid crystals. With the advent of bent-core and tetrapodic mesogens and the discovery of the biaxial nematic phase in them, the experimental results require more careful interpretation and analysis. Here, we present ab-initio calculations of X-ray diffraction patterns in the isotropic, uniaxial nematic, and biaxial nematic phases of bent-core mesogens. A simple Maier-Saupe-like molecular distribution function is employed to describe both aligned and unaligned mesophases. The distribution function is decomposed into two, polar and azimuthal, distribution functions to calculate the effect of the evolution of uniaxial and biaxial nematic orientational order. The calculations provide satisfactory semiquantitative interpretations of experimental results. The calculations presented here should provide a pathway to more refined and quantitative analysis of X-ray diffraction data from the biaxial nematic phase.

  17. Structure and vibrational spectra of melaminium bis(trifluoroacetate) trihydrate: FT-IR, FT-Raman and quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Sangeetha, V.; Govindarajan, M.; Kanagathara, N.; Marchewka, M. K.; Gunasekaran, S.; Anbalagan, G.

    Melaminium bis(trifluoroacetate) trihydrate (MTFA), an organic material, has been synthesized, and single crystals of MTFA have been grown by the slow solvent evaporation method at room temperature. X-ray powder diffraction analysis confirms that the MTFA crystal belongs to the monoclinic system with space group P2/c. The molecular geometry, vibrational frequencies and intensity of the vibrational bands have been interpreted with the aid of structure optimization based on the density functional theory (DFT) B3LYP method with 6-311G(d,p) and 6-311++G(d,p) basis sets. The X-ray diffraction data have been compared with the data of the optimized molecular structure. The theoretical results show that the crystal structure can be reproduced by the optimized geometry, and the vibrational frequencies show good agreement with the experimental values. The nuclear magnetic resonance (NMR) chemical shift of the molecule has been calculated by the gauge independent atomic orbital (GIAO) method and compared with experimental results. HOMO-LUMO energies and other related molecular and electronic properties have been calculated. The Mulliken and NBO charges have also been calculated and interpreted.

  18. Artificial neural network retrained to detect myocardial ischemia using a Japanese multicenter database.

    PubMed

    Nakajima, Kenichi; Okuda, Koichi; Watanabe, Satoru; Matsuo, Shinro; Kinuya, Seigo; Toth, Karin; Edenbrandt, Lars

    2018-03-07

    An artificial neural network (ANN) has been applied to detect myocardial perfusion defects and ischemia. The present study compares the diagnostic accuracy of a more recent ANN version (1.1) with the initial version 1.0. We examined 106 patients (age, 77 ± 10 years) with coronary angiographic findings, comprising multi-vessel disease (≥ 50% stenosis) (52%) or old myocardial infarction (27%), or who had undergone coronary revascularization (30%). The ANN versions 1.0 and 1.1 were trained in Sweden (n = 1051) and Japan (n = 1001), respectively, using 99mTc-methoxyisobutylisonitrile myocardial perfusion images. The ANN probabilities (from 0.0 to 1.0) of stress defects and ischemia were calculated in candidate regions of abnormalities. The diagnostic accuracy was compared using receiver-operating characteristics (ROC) analysis and the calculated area under the ROC curve (AUC) using expert interpretation as the gold standard. Although the AUC for stress defects was 0.95 and 0.93 (p = 0.27) for versions 1.1 and 1.0, respectively, that for detecting ischemia was significantly improved in version 1.1 (p = 0.0055): AUC 0.96 for version 1.1 (sensitivity 87%, specificity 96%) vs. 0.89 for version 1.0 (sensitivity 78%, specificity 97%). The improvement in the AUC shown by version 1.1 was also significant for patients with neither coronary revascularization nor old myocardial infarction (p = 0.0093): AUC = 0.98 for version 1.1 (sensitivity 88%, specificity 100%) and 0.88 for version 1.0 (sensitivity 76%, specificity 100%). Intermediate ANN probability between 0.1 and 0.7 was more often calculated by version 1.1 compared with version 1.0, which contributed to the improved diagnostic accuracy. The diagnostic accuracy of the new version was also improved in patients with either single-vessel disease or no stenosis (n = 47; AUC 0.81 vs. 0.66, p = 0.0060) when coronary stenosis was used as a gold standard. 
The diagnostic ability of the ANN version 1.1 was improved by retraining using the Japanese database, particularly for identifying ischemia.
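Since the AUC is the probability that a randomly chosen abnormal study receives a higher ANN probability than a randomly chosen normal one, it can be computed directly by rank comparison. The labels and scores below are hypothetical, not data from the study:

```python
def auc_rank(labels, scores):
    """AUC as P(positive outranks negative), counting ties as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]                 # 1 = ischemia per expert reading
scores = [0.10, 0.40, 0.35, 0.80]     # hypothetical ANN probabilities
auc = auc_rank(labels, scores)        # 0.75
```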

  19. Evaluation of the BD BACTEC FX blood volume monitoring system as a continuous quality improvement measure.

    PubMed

    Coorevits, L; Van den Abeele, A-M

    2015-07-01

    The yield of blood cultures is proportional to the volume of blood cultured. We evaluated an automatic blood volume monitoring system, recently developed by Becton Dickinson within its BACTEC EpiCenter module, that calculates mean volumes of negative aerobic bottles and generates boxplots and histograms. First, we evaluated the filling degree of 339 aerobic glass blood cultures by calculating the weight-based volume for each bottle. A substantial amount of the bottles (48.3%) were inadequately filled. Evaluation of the accuracy of the monitoring system showed a mean bias of -1.4 mL (-15.4%). Additional evaluation, using the amended software on 287 aerobic blood culture bottles, resulted in an acceptable mean deviation of -0.3 mL (-3.3%). The new software version was also tested on 200 of the recently introduced plastic bottles, which will replace the glass bottles in the near future, showing a mean deviation of +2.8 mL (+26.7%). In conclusion, the mean calculated volumes can be used for the training of a single phlebotomist. However, filling problems appear to be masked when using them for phlebotomist groups or on wards. Here, visual interpretation of boxplots and histograms can serve as a useful tool to observe the spread of the filling degrees and to develop a continuous improvement program. Re-adjustment of the software has proven to be necessary for use with plastic bottles. Due to our findings, BD has developed further adjustments to the software for validated use with plastic bottles, which will be released soon.

  20. Explicit tracking of uncertainty increases the power of quantitative rule-of-thumb reasoning in cell biology.

    PubMed

    Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S

    2014-12-02

    Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
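    The core idea behind CALADIS-style calculations can be illustrated with a toy Monte Carlo sketch (all quantities and distributions below are hypothetical, not taken from the tool): instead of multiplying point estimates, sample each uncertain quantity from a distribution and propagate the samples through the calculation, yielding a distribution of results rather than a single number.

```python
import random
import statistics

random.seed(0)

def sample_result(n=100_000):
    """Propagate uncertainty by sampling instead of using point estimates."""
    results = []
    for _ in range(n):
        # e.g. molecule copies per cell with lognormal spread,
        # cell volume with normal spread (illustrative units)
        copies = random.lognormvariate(mu=6.0, sigma=0.5)
        volume = random.gauss(mu=2.0, sigma=0.2)
        results.append(copies / volume)  # derived concentration
    return results

r = sample_result()
q = statistics.quantiles(r, n=20)  # 19 cut points: 5%, 10%, ..., 95%
print(statistics.median(r), q[0], q[-1])  # median with 5th-95th percentile spread
```

    A point estimate (e^6 / 2 ≈ 202) reports a single value; the sampled distribution also exposes how wide the plausible range is, which is what makes the resulting statements statistically robust.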

  1. From Computer-interpretable Guidelines to Computer-interpretable Quality Indicators: A Case for an Ontology.

    PubMed

    White, Pam; Roudsari, Abdul

    2014-01-01

    In the United Kingdom's National Health Service, quality indicators are generally measured electronically by using queries and data extraction, resulting in overlap and duplication of query components. Electronic measurement of health care quality indicators could be improved through an ontology intended to reduce duplication of effort during healthcare quality monitoring. While much research has been published on ontologies for computer-interpretable guidelines, quality indicators have lagged behind. We aimed to determine progress on the use of ontologies to facilitate computer-interpretable healthcare quality indicators. We assessed potential for improvements to computer-interpretable healthcare quality indicators in England. We concluded that an ontology for a large, diverse set of healthcare quality indicators could benefit the NHS and reduce workload, with potential lessons for other countries.

  2. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.

    PubMed

    Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso

    2018-04-22

    This study provides current evidence about cross-section production processes in the theoretical and experimental results of neutron-induced reactions on a uranium isotope over the projectile energy range of 1-100 MeV, in order to improve the reliability of nuclear simulation. In fission reactions of 235 U within nuclear reactors, a large amount of energy is released, enough to help satisfy worldwide energy needs without the polluting processes associated with other sources. The main objective of this work is to contribute to knowledge of neutron-induced fission reactions on 235 U by describing, analyzing and interpreting the theoretical cross sections obtained from the computer code COMPLET and comparing them with experimental data obtained from EXFOR. The cross-section values of 235 U(n,2n) 234 U, 235 U(n,3n) 233 U, 235 U(n,γ) 236 U and 235 U(n,f) were obtained using the computer code COMPLET, and the corresponding experimental values were retrieved from the IAEA EXFOR Data Bank. Computer code COMPLET was used for the analysis with the same set of input parameters, and the graphs were plotted with spreadsheet and Origin 8 software. The quantification of uncertainties stemming from both the experimental data and the computer code calculation plays a significant role in the final evaluated results. The calculated total cross sections were compared with the experimental data from EXFOR, and good agreement was found between the experimental and theoretical data. This comparison was analyzed and interpreted with tabulated and graphical descriptions, and the results are briefly discussed within the text of this research work. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Reliability and main findings of the FEES-Tensilon Test in patients with myasthenia gravis and dysphagia.

    PubMed

    Im, Sun; Suntrup-Krueger, Sonja; Colbow, Sigrid; Sauer, Sonja; Claus, Inga; Meuth, Sven G; Dziewas, Rainer; Warnecke, Tobias

    2018-05-26

    Diagnosis of pharyngeal dysphagia caused by myasthenia gravis (MG) based on clinical examination alone is often challenging. Flexible endoscopic evaluation of swallowing (FEES) combined with Tensilon (edrophonium) application, referred to as the FEES-Tensilon Test, was developed to improve diagnostic accuracy and to detect the main symptoms of pharyngeal dysphagia in MG. Here we investigated inter- and intra-rater reliability of the FEES-Tensilon Test and analyzed the main endoscopic findings. Four experienced raters reviewed a total of 20 FEES-Tensilon Test videos in randomized order. Residue severity was graded at 4 different pharyngeal spaces before and after Tensilon administration. All interpretations were performed twice per rater, 4 weeks apart (a total of 160 scorings). Intra-rater test-retest reliability and inter-rater reliability levels were calculated. The most frequent FEES findings in MG patients before Tensilon application were prominent residues of semi-solids spread all over the hypopharynx in varying locations. The reliability level in the interpretation of the FEES-Tensilon Test was excellent regardless of the raters' profession or years of experience with FEES. All 4 raters showed high inter- and intra-rater reliability levels in interpreting the FEES-Tensilon Test based on residue clearance (kappa=0.922, 0.981). Degree of residue normalization in the vallecular space after Tensilon application showed the highest inter- and intra-rater reliability level (kappa=0.863, 0.957), followed by the epiglottis (kappa=0.813, 0.946) and pyriform sinuses (kappa=0.836, 0.929). Interpretation of the FEES-Tensilon Test based on residue severity and degree of Tensilon clearance, especially in the vallecular space, is consistent and reliable. This article is protected by copyright. All rights reserved.
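    The kappa values reported above measure agreement beyond chance. A compact sketch with made-up ratings (not the study's videos) of Cohen's kappa for two raters, kappa = (p_observed - p_chance) / (1 - p_chance):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal label frequencies
    p_chance = sum(ca[c] * cb[c] for c in ca) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical residue-severity grades (0-3) for ten videos:
a = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
b = [0, 1, 2, 3, 1, 2, 1, 3, 2, 1]
print(cohens_kappa(a, b))
```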

  4. Evaluation of interobserver variability of parenchymal phase of Tc-99m mercaptoacetyltriglycine and Tc-99m dimercaptosuccinic acid renal scintigraphy

    PubMed Central

    Erdoğan, Zeynep; Abdülrezzak, Ümmühan; Silov, Güler; Özdal, Ayşegül; Turhal, Özgül

    2014-01-01

    Objective: The aim of this study was to investigate variability in the interpretation of parenchymal abnormalities and to assess differences in the interpretation of routine renal scintigraphic findings on the posterior view of technetium-99m dimercaptosuccinic acid (pvDMSA) scans and the parenchymal phase of technetium-99m mercaptoacetyltriglycine (ppMAG3) scans, using standard criteria to achieve standardization, enable semiquantitative evaluation, and obtain more accurate correlation. Materials and Methods: Two experienced nuclear medicine physicians independently and retrospectively interpreted pvDMSA scans of 204 and ppMAG3 scans of 102 pediatric patients. Comparisons were made by visual inspection of pvDMSA and ppMAG3 scans using a grading system modified from Itoh et al., in which anatomical damage of the renal parenchyma was classified into six types: Grade 0-V. In the calculation of the agreement rates, Kendall correlation (tau-b) analysis was used. Results: Excellent agreement was found for DMSA grade readings (DMSA-GR) (tau-b = 0.827) and good agreement for MAG3 grade readings (MAG3-GR) (tau-b = 0.790) between the two observers. Most clear parenchymal lesions detected on pvDMSA and ppMAG3 scans were identified equally by both observers. Studies with negative or minimal lesions reduced the correlation for both DMSA-GR and MAG3-GR. Conclusion: Our grading system can be used for standardization of reports. We conclude that standardization of criteria and terminology in interpretations may result in higher interobserver consistency and improve the low interobserver reproducibility and objectivity of renal scintigraphy reports. PMID:24761059
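    The Kendall tau-b statistic used above counts concordant minus discordant pairs of gradings, with a correction for ties in either rating. A compact sketch (the example grades are hypothetical, not the study's readings):

```python
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall tau-b for two paired rating lists, with tie correction."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue                  # tied in both ratings: excluded
            elif dx == 0:
                ties_x += 1               # tied in x only
            elif dy == 0:
                ties_y += 1               # tied in y only
            elif dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    denom = sqrt((concordant + discordant + ties_x) *
                 (concordant + discordant + ties_y))
    return (concordant - discordant) / denom

# Hypothetical Grade 0-V readings from two observers:
obs1 = [0, 1, 1, 2, 3, 5]
obs2 = [0, 1, 2, 2, 3, 4]
print(kendall_tau_b(obs1, obs2))  # 13/14, i.e. ~0.93
```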

  5. Special Operations Forces Language and Culture Needs Assessment: General Use of Interpreters

    DTIC Science & Technology

    2010-11-04

    strategic use of interpreters. This information can be used to examine and revise policies and everyday practice related to interpreter use, so that SOF...operators’ mission effectiveness can be improved. Examining the current state of interpreter use in the SOF community can highlight important...the other hand, those who received pre-deployment use of interpreter training found it effective. This training can teach SOF operators how to

  6. First principles and experimental study of the electronic structure and phase stability of bulk thallium bromide

    NASA Astrophysics Data System (ADS)

    Smith, Holland M.; Zhou, Yuzhi; Ciampi, Guido; Kim, Hadong; Cirignano, Leonard J.; Shah, Kanai S.; Haller, E. E.; Chrzan, D. C.

    2013-08-01

    We apply state-of-the-art first-principles calculations to study the polymorphism and electronic structure of three previously reported phases of TlBr. The calculated band structures of the NaCl-structure and orthorhombic-structure phases show different features from that of the commonly observed CsCl-structure phase. We further interpret photoluminescence spectra based on our calculations. Several peaks close to the calculated band gap values of the NaCl-structure phase and the orthorhombic-structure phase are found in unpolished TlBr samples.

  7. The Mediating Role of Insight for Long-Term Improvements in Psychodynamic Therapy

    ERIC Educational Resources Information Center

    Johansson, Paul; Hoglend, Per; Ulberg, Randi; Amlo, Svein; Marble, Alice; Bogwald, Kjell-Petter; Sorbye, Oystein; Sjaastad, Mary Cosgrove; Heyerdahl, Oscar

    2010-01-01

    Objective: According to psychoanalytic theory, interpretation of transference leads to increased insight that again leads to improved interpersonal functioning over time. In this study, we performed a full mediational analysis to test whether insight gained during treatment mediates the long-term effects of transference interpretation in dynamic…

  8. 42 CFR 480.141 - Disclosure of QIO interpretations on the quality of health care.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.141 Disclosure of... 42 Public Health 4 2011-10-01 2011-10-01 false Disclosure of QIO interpretations on the quality of...

  9. 42 CFR 480.141 - Disclosure of QIO interpretations on the quality of health care.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality... 42 Public Health 4 2013-10-01 2013-10-01 false Disclosure of QIO interpretations on the quality of...

  10. 42 CFR 480.141 - Disclosure of QIO interpretations on the quality of health care.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality... 42 Public Health 4 2014-10-01 2014-10-01 false Disclosure of QIO interpretations on the quality of...

  11. 42 CFR 480.141 - Disclosure of QIO interpretations on the quality of health care.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.141 Disclosure of... 42 Public Health 4 2010-10-01 2010-10-01 false Disclosure of QIO interpretations on the quality of...

  12. 42 CFR 480.141 - Disclosure of QIO interpretations on the quality of health care.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality... 42 Public Health 4 2012-10-01 2012-10-01 false Disclosure of QIO interpretations on the quality of...

  13. General airplane performance

    NASA Technical Reports Server (NTRS)

    Rockfeller, W C

    1939-01-01

    Equations have been developed for the analysis of the performance of the ideal airplane, leading to an approximate physical interpretation of the performance problem. The basic sea-level airplane parameters have been generalized to altitude parameters, and a new parameter has been introduced and physically interpreted. The performance analysis for actual airplanes has been obtained in terms of the equivalent ideal airplane in order that the charts developed for use in practical calculations will for the most part apply to any type of engine-propeller combination and system of control, the only additional material required being the actual engine and propeller curves for the propulsion unit. Finally, a more exact method for the calculation of the climb characteristics for the constant-speed controllable propeller is presented in the appendix.

  14. The Impact of Consecutive Interpreting Training on the L2 Listening Competence Enhancement

    ERIC Educational Resources Information Center

    Zhang, Tongtong; Wu, Zhiwei

    2017-01-01

    In recent years, a growing number of people have taken up interpreting training, with the intention of not only developing interpreting skills, but improving language proficiency as well. The present study sets out to investigate the impact of English-Chinese consecutive interpreting (CI) training on the enhancement of the second language (L2,…

  15. Joint Inversion of 3d Mt/gravity/magnetic at Pisagua Fault.

    NASA Astrophysics Data System (ADS)

    Bascur, J.; Saez, P.; Tapia, R.; Humpire, M.

    2017-12-01

    This work shows the results of a joint inversion at the Pisagua Fault using 3D magnetotelluric (MT), gravity and regional magnetic data. The MT survey has poor coverage of the study area, with only 21 stations; however, it allows detection of a low-resistivity zone aligned with the Pisagua Fault trace that is interpreted as a damage zone. The integration of gravity and magnetic data, which have denser sampling and better coverage, adds detail and resolution to the detected low-resistivity structure and helps improve the structural interpretation using the resulting models (density, magnetic susceptibility and electrical resistivity). The joint inversion minimizes a multi-term objective function that includes the data misfit, model roughness and coupling norms (cross-gradient and direct relations) for all geophysical methods considered (MT, gravity and magnetic). The problem is solved iteratively using the Gauss-Newton method, which updates the model of each geophysical method, improving its individual data misfit and model roughness as well as the coupling with the other geophysical models. Dedicated 3D inversion codes, which include the coupling norms with the other geophysical parameters, were developed to solve the model updates for the magnetic and gravity methods. The model update for 3D MT is calculated using an iterative method that sequentially filters the prior model and the output model of a single 3D MT inversion to obtain a resistivity model coupled with the gravity and magnetic solutions.
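    The cross-gradient coupling norm mentioned above can be illustrated with a toy 2D sketch (grids and values hypothetical): the cross-gradient t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx) vanishes wherever the two models' property gradients are parallel, i.e. wherever their structural boundaries are aligned.

```python
def cross_gradient(m1, m2):
    """Sum of squared 2D cross-gradient values over a grid
    (forward finite differences; rows = depth, columns = distance)."""
    total = 0.0
    for i in range(len(m1) - 1):
        for j in range(len(m1[0]) - 1):
            dm1_dx = m1[i][j + 1] - m1[i][j]
            dm1_dz = m1[i + 1][j] - m1[i][j]
            dm2_dx = m2[i][j + 1] - m2[i][j]
            dm2_dz = m2[i + 1][j] - m2[i][j]
            total += (dm1_dx * dm2_dz - dm1_dz * dm2_dx) ** 2
    return total

resistivity = [[1.0, 1.0, 5.0],
               [1.0, 1.0, 5.0],
               [2.0, 2.0, 8.0]]
# A density model with the same boundaries (here a simple scaling):
density = [[2.0, 2.0, 10.0],
           [2.0, 2.0, 10.0],
           [4.0, 4.0, 16.0]]
print(cross_gradient(resistivity, density))  # 0.0: structurally aligned
```

    Because the density grid is a cell-for-cell scaling of the resistivity grid, every pair of gradients is parallel and the coupling norm is exactly zero; misaligned boundaries make it positive, which is what the joint inversion penalizes.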

  16. Abbreviated breast magnetic resonance protocol: Value of high-resolution temporal dynamic sequence to improve lesion characterization.

    PubMed

    Oldrini, Guillaume; Fedida, Benjamin; Poujol, Julie; Felblinger, Jacques; Trop, Isabelle; Henrot, Philippe; Darai, Emile; Thomassin-Naggara, Isabelle

    2017-10-01

    To evaluate the added value of an ULTRAFAST-MR sequence added to an abbreviated FAST protocol, in comparison with the FULL protocol, for distinguishing benign from malignant lesions in a population of women, regardless of breast MR imaging indication. From March 10th to September 22nd, 2014, we retrospectively included a total of 70 consecutive patients with 106 histologically proven lesions (58 malignant and 48 benign) who underwent breast MR imaging for preoperative breast staging (n=38), high-risk screening (n=7), problem solving (n=18), and nipple discharge (n=4), with 12 time-resolved imaging of contrast kinetics (TRICKS) acquisitions during contrast inflow interleaved in a regular high-resolution dynamic MRI protocol (FULL protocol). Two readers scored MR exams as either positive or negative and described significant lesions according to the BI-RADS lexicon using TRICKS images (ULTRAFAST), an abbreviated protocol (FAST) and all images (FULL protocol). Sensitivity, specificity, positive and negative predictive values, and accuracy were calculated for each protocol and compared with McNemar's test. For all readers, the combined FAST-ULTRAFAST protocol significantly improved reading, with a specificity of 83.3% and 70.8% compared with the FAST and FULL protocols, respectively, without change in sensitivity. By adding the ULTRAFAST protocol to the FAST protocol, readers 1 and 2 were able to correctly change the diagnosis in 22.9% (11/48) and 10.4% (5/48) of benign lesions, respectively, without missing any malignancy. Both interpretation and image acquisition times for the combined FAST-ULTRAFAST protocol and the FAST protocol were shorter than for the FULL protocol (p<0.001). Compared to the FULL protocol, adding ULTRAFAST to the FAST protocol improves specificity, mainly by correctly reclassifying benign masses, and reduces interpretation and acquisition time, without decreasing sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.
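    McNemar's test, used above to compare paired protocol readings, looks only at the discordant pairs: lesions classified correctly by one protocol but not the other. A minimal sketch of the continuity-corrected statistic (counts below are hypothetical, not the study's):

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square statistic for the two
    discordant counts b and c; compare against the chi-square(1 df)
    critical value 3.84 for p < 0.05."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# e.g. 11 lesions reclassified correctly only by FAST-ULTRAFAST and
# 2 only by FAST (hypothetical counts):
print(mcnemar_statistic(11, 2))  # 64/13, i.e. ~4.92 > 3.84
```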

  17. Characterization And Partitioning Of CH4 And CO2 Eddy Flux Data Measured at NGEE-Arctic Sites

    NASA Astrophysics Data System (ADS)

    Dengel, S.; Chafe, O.; Curtis, J. B.; Biraud, S.; Torn, M. S.; Wullschleger, S. D.

    2017-12-01

    The high latitudes are experiencing rapid warming, with permafrost ecosystems highly vulnerable to this change. Since the advancement of eddy covariance (EC) measurements, the number of high-latitude sites measuring greenhouse gas and energy (CO2, CH4 and H2O) fluxes has been steadily increasing, with new sites established each year. Data from these sites are not only valuable for annual carbon budget calculations, but also vital to the modeling community for improving predictions of emission rates and trends. CH4 flux measurements are not as straightforward as CO2 flux measurements: they tend to be less predictable and less easily interpretable, and understanding CH4 emission patterns is often challenging. Moreover, gas flux fluctuations are spatially and temporally diverse, and in many cases event-based. An improvement in understanding would also contribute to the fidelity of model predictions, which relies on high-quality data and thus entails developing new QA/QC and gap-filling methods for Arctic systems, particularly for CH4. Contributing to these challenges are the limited number of ancillary measurements carried out at many sites and the lack of standardized data processing, QA/QC and gap-filling procedures. CO2, CH4 and energy flux measurements are ongoing at both NGEE-Arctic/AmeriFlux sites: US-NGB (Arctic coastal plain) and US-NGC (subarctic tussock tundra). The sites, with underlying continuous permafrost, show a high degree of inter-annual and seasonal variability in CH4 fluxes. In order to interpret this variability, we apply a variety of models, such as footprint characterization, generalized additive models and artificial neural networks, in an attempt to decipher these diverse fluxes, patterns and events.

  18. Elimination of scattered gamma rays from injection sites using upper offset energy windows in sentinel lymph node scintigraphy.

    PubMed

    Yoneyama, Hiroto; Tsushima, Hiroyuki; Onoguchi, Masahisa; Konishi, Takahiro; Nakajima, Kenichi; Kinuya, Seigo

    2015-05-01

    The identification of sentinel lymph nodes (SLNs) near injection sites is difficult because of scattered gamma rays. The purpose of this study was to investigate the optimal energy windows for elimination of scattered gamma rays in order to improve the detection of SLNs. The clinical study group consisted of 56 female patients with breast cancer. The energy window was centred at 140 keV with a 20% width for Tc-99m and divided into five 4% subwindows in planar imaging. Regions of interest were placed on SLNs and the background, and contrast was calculated using a standard equation. The confidence levels of interpretations were evaluated using a five-grade scale. The contrast provided by 145.6 keV±2% was the best, followed by 140 keV±2%, 151.2 keV±2%, 134.4 keV±2% and 128.8 keV±2%, in that order. When the 128.8 keV±2% and 134.4 keV±2% subwindows were eliminated from 140 keV±10% (leaving 145.6 keV±6%), the contrast of SLNs improved significantly. The confidence level of interpretation and detection rate provided by the planar images with 140 keV±10% were 4.74±0.58 and 94.8%, respectively, and those provided by 145.6 keV±6% were 4.94±0.20 and 100%. Because lower energy windows contain many scattered gamma rays, upper offset energy windows, which exclude the lower energy windows, improve the image contrast of SLNs near injection sites.
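    The abstract does not spell out its "standard equation" for contrast; one common choice in scintigraphy, shown here purely as an assumption with made-up counts, relates mean ROI counts in the target to the background:

```python
def contrast(target_counts, background_counts):
    """Contrast as (target - background) / background; NOTE: this exact
    formula is an assumption, as the abstract does not specify one."""
    return (target_counts - background_counts) / background_counts

# Hypothetical mean counts in the SLN ROI vs. the background ROI:
print(contrast(520.0, 80.0))  # 5.5
```

    Whatever the exact formula, the mechanism above is the same: excluding the scatter-heavy lower subwindows lowers the background counts more than the SLN counts, so the contrast ratio rises.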

  19. A two-factor theory for concussion assessment using ImPACT: memory and speed.

    PubMed

    Schatz, Philip; Maerlender, Arthur

    2013-12-01

    We present the initial validation of a two-factor structure of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) using ImPACT composite scores and document the reliability and validity of this factor structure. Factor analyses were conducted for baseline (N = 21,537) and post-concussion (N = 560) data, yielding "Memory" (Verbal and Visual) and "Speed" (Visual Motor Speed and Reaction Time) Factors; inclusion of Total Symptom Scores resulted in a third discrete factor. Speed and Memory z-scores were calculated, and test-retest reliability (using intra-class correlation coefficients) at 1 month (0.88/0.81), 1 year (0.85/0.75), and 2 years (0.76/0.74) were higher than published data using Composite scores. Speed and Memory scores yielded 89% sensitivity and 70% specificity, which was higher than composites (80%/62%) and comparable with subscales (91%/69%). This emergent two-factor structure has improved test-retest reliability with no loss of sensitivity/specificity and may improve understanding and interpretability of ImPACT test results.
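    The test-retest figures above are intra-class correlation coefficients. As an illustrative sketch (scores invented, and the study's exact ICC model is not specified in the abstract), here is a two-way mixed, single-measures ICC(3,1) for paired baseline/retest scores:

```python
def icc_3_1(pairs):
    """ICC(3,1) from (test, retest) pairs via a two-way ANOVA decomposition."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    ss_subj = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs)
    mean_t1 = sum(a for a, _ in pairs) / n
    mean_t2 = sum(b for _, b in pairs) / n
    ss_occ = n * ((mean_t1 - grand) ** 2 + (mean_t2 - grand) ** 2)
    ss_tot = sum((x - grand) ** 2 for a, b in pairs for x in (a, b))
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_tot - ss_subj - ss_occ) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Hypothetical baseline/retest Memory z-scores for five athletes:
scores = [(-1.2, -1.0), (0.3, 0.4), (1.1, 0.9), (-0.4, -0.5), (0.8, 0.7)]
print(icc_3_1(scores))
```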

  20. The Productivity Dilemma in Workplace Health Promotion

    PubMed Central

    Cherniack, Martin

    2015-01-01

    Background. Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion, WHP) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing contention in health economics that the return on investment (ROI) does not justify preventive health investment. Methods/Procedures. Pertinent studies were reviewed and their results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of three conceptual dilemmas. Principal Findings. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Significance. Simple monetization of a working life and calculation of the return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility. PMID:26380374
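    The sensitivity of a CBA verdict to the discounting assumption can be shown with a minimal net-present-value sketch (cash flows hypothetical): the present value of a stream of future health benefits shrinks as the discount rate grows, which can flip the sign of the result.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical WHP program: up-front cost of 100, then a benefit of
# 12 per year for 15 years:
flows = [-100] + [12] * 15
print(npv(flows, 0.00))  # undiscounted: +80, the program looks worthwhile
print(npv(flows, 0.10))  # discounted at 10%: negative -- the verdict flips
```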

  1. X-ray absorption spectroscopy of LiBF 4 in propylene carbonate. A model lithium ion battery electrolyte

    DOE PAGES

    Smith, Jacob W.; Lam, Royce K.; Sheardy, Alex T.; ...

    2014-08-20

    Since their introduction into the commercial marketplace in 1991, lithium ion batteries have become increasingly ubiquitous in portable technology. Nevertheless, improvements to existing battery technology are necessary to expand their utility for larger-scale applications, such as electric vehicles. Advances may be realized from improvements to the liquid electrolyte; however, current understanding of the liquid structure and properties remains incomplete. X-ray absorption spectroscopy of solutions of LiBF 4 in propylene carbonate (PC), interpreted using first-principles electronic structure calculations within the eXcited electron and Core Hole (XCH) approximation, yields new insight into the solvation structure of the Li+ ion in this model electrolyte. By generating linear combinations of the computed spectra of Li+-associating and free PC molecules and comparing to the experimental spectrum, we find a Li+–solvent interaction number of 4.5. This result suggests that computational models of lithium ion battery electrolytes should move beyond tetrahedral coordination structures.
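    The fitting strategy described above can be sketched with a toy example (all spectra synthetic): scan for the mixing weight w that best matches the "experimental" spectrum as w·(bound-PC spectrum) + (1 − w)·(free-PC spectrum). Converting w into an interaction number additionally requires the solution stoichiometry, which is not modeled here.

```python
def best_weight(expt, bound, free, steps=1000):
    """Grid-search the weight w minimizing the squared misfit between
    expt and the linear combination w*bound + (1-w)*free."""
    best_err, best_w = float("inf"), 0.0
    for k in range(steps + 1):
        w = k / steps
        err = sum((e - (w * b + (1 - w) * f)) ** 2
                  for e, b, f in zip(expt, bound, free))
        if err < best_err:
            best_err, best_w = err, w
    return best_w

bound = [0.1, 0.8, 0.3, 0.05]   # computed spectrum, Li+-bound PC (made up)
free  = [0.4, 0.2, 0.6, 0.30]   # computed spectrum, free PC (made up)
expt  = [0.6 * b + 0.4 * f for b, f in zip(bound, free)]  # synthetic "data"
print(best_weight(expt, bound, free))  # recovers 0.6
```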

  2. Inserting Thienyl Linkers into Conjugated Molecules for Efficient Multilevel Electronic Memory: A New Understanding of Charge-Trapping in Organic Materials.

    PubMed

    Li, Yang; Li, Hua; He, Jinghui; Xu, Qingfeng; Li, Najun; Chen, Dongyun; Lu, Jianmei

    2016-03-18

    The practical application of organic memory devices requires low power consumption and reliable device quality. Herein, we report that inserting thienyl units into D-π-A molecules can improve these parameters by tuning the texture of the film. Theoretical calculations revealed that introducing thienyl π bridges increased the planarity of the molecular backbone and extended the D-A conjugation. Thus, molecules with more thienyl spacers showed improved stacking and orientation in the film state relative to the substrates. The corresponding sandwiched memory devices showed enhanced ternary memory behavior, with lower threshold voltages and better repeatability. The conductive switching and variation in the performance of the memory devices were interpreted by using an extended-charge-trapping mechanism. Our study suggests that judicious molecular engineering can facilitate control of the orientation of the crystallite in the solid state to achieve superior multilevel memory performance. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Introducing the Concept of the Minimally Important Difference to Determine a Clinically Relevant Change on Patient-Reported Outcome Measures in Patients with Intermittent Claudication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conijn, Anne P., E-mail: a.p.conijn@amc.nl; Jonkers, Wilma, E-mail: wilma.jonkers@achmea.nl; Rouwet, Ellen V., E-mail: e.rouwet@erasmusmc.nl

    Purpose: The minimally important difference (MID) represents the smallest change in score on patient-reported outcome measures that is relevant to patients. The aim of this study was to introduce the MID for the Vascular Quality of Life Questionnaire (VascuQol) and the Walking Impairment Questionnaire (WIQ) for patients with intermittent claudication (IC). Methods: In this multicenter study, we recruited 294 patients with IC between July and October 2012. Patients completed the VascuQol, with scores ranging from 1 to 7 (worst to best), and the WIQ, with scores ranging from 0 to 1 (worst to best), at the first visit and after 4 months of follow-up. In addition, patients answered an anchor question rating their health status compared to baseline as improved, unchanged, or deteriorated. The MID for improvement and deterioration was calculated by an anchor-based approach and determined from the upper and lower limits of the 95% confidence interval of the mean change of the group that had not changed according to the anchor question. Results: For the MID analyses of the VascuQol and WIQ, 163 and 134 patients were included, respectively. The MID values for the VascuQol (mean baseline score 4.25) were 0.87 for improvement and 0.23 for deterioration. For the WIQ (mean baseline score 0.39), we found MID values of 0.11 and −0.03 for improvement and deterioration, respectively. Conclusion: In this study, we calculated the MID for the VascuQol and the WIQ. Applying these MIDs facilitates better interpretation of treatment outcomes and can help to set treatment goals for individual care.
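    The anchor-based computation described above can be sketched as follows (change scores invented, and the normal-approximation 95% interval used here is an assumption; the study's exact interval construction is not given in the abstract):

```python
from statistics import mean, stdev
from math import sqrt

def mid_limits(changes):
    """MID limits as the bounds of the 95% CI of the mean change score
    in the 'unchanged' anchor group (normal approximation)."""
    n = len(changes)
    m = mean(changes)
    half_width = 1.96 * stdev(changes) / sqrt(n)
    return m - half_width, m + half_width  # (deterioration, improvement)

# Hypothetical VascuQol change scores for patients reporting "unchanged":
changes = [0.5, 0.2, -0.1, 0.4, 0.6, 0.0, 0.3, 0.1, 0.5, 0.2]
print(mid_limits(changes))
```

    Changes larger than the upper limit count as real improvement, and changes below the lower limit as real deterioration, because the interval brackets the score drift seen in patients who reported no change.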

  4. B → Dℓν form factors at nonzero recoil and |V cb| from 2+1-flavor lattice QCD

    DOE PAGES

    Bailey, Jon A.

    2015-08-10

    We present the first unquenched lattice-QCD calculation of the hadronic form factors for the exclusive decay B¯→Dℓν¯ at nonzero recoil. We carry out numerical simulations on 14 ensembles of gauge-field configurations generated with 2+1 flavors of asqtad-improved staggered sea quarks. The ensembles encompass a wide range of lattice spacings (approximately 0.045 to 0.12 fm) and ratios of light (up and down) to strange sea-quark masses ranging from 0.05 to 0.4. For the b and c valence quarks we use improved Wilson fermions with the Fermilab interpretation, while for the light valence quarks we use asqtad-improved staggered fermions. We extrapolate our results to the physical point using rooted staggered heavy-light meson chiral perturbation theory. We then parametrize the form factors and extend them to the full kinematic range using model-independent functions based on analyticity and unitarity. We present our final results for f+(q²) and f0(q²), including statistical and systematic errors, as coefficients of a series in the variable z and the covariance matrix between these coefficients. We then fit the lattice form-factor data jointly with the experimentally measured differential decay rate from BABAR to determine the CKM matrix element |Vcb| = (39.6 ± 1.7 QCD+exp ± 0.2 QED) × 10⁻³. As a byproduct of the joint fit we obtain the form factors with improved precision at large recoil. Finally, we use them to update our calculation of the ratio R(D) in the Standard Model, which yields R(D) = 0.299(11).
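    As general background on the "series in the variable z" (standard conventions, not necessarily the paper's exact normalization): the kinematic range is mapped onto a small conformal variable z, and each form factor is expanded in a rapidly converging series,

```latex
% Conformal variable built from the recoil parameter w = v_B \cdot v_D:
z(w) = \frac{\sqrt{w+1} - \sqrt{2}}{\sqrt{w+1} + \sqrt{2}},
\qquad
f(z) = \frac{1}{P(z)\,\phi(z)} \sum_{n=0}^{N} a_n \, z^n ,
% where P(z) removes sub-threshold poles and \phi(z) is an outer function
% chosen so that unitarity bounds the coefficients, \sum_n a_n^2 \le 1.
```

    Because |z| stays small over the physical range, a short truncation of the series already describes the full kinematic range, which is what makes the parametrization model-independent.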

  5. Structured reporting platform improves CAD-RADS assessment.

    PubMed

    Szilveszter, Bálint; Kolossváry, Márton; Karády, Júlia; Jermendy, Ádám L; Károlyi, Mihály; Panajotu, Alexisz; Bagyura, Zsolt; Vecsey-Nagy, Milán; Cury, Ricardo C; Leipsic, Jonathon A; Merkely, Béla; Maurovich-Horvat, Pál

    2017-11-01

    Structured reporting in cardiac imaging is strongly encouraged to improve quality through consistency. The Coronary Artery Disease - Reporting and Data System (CAD-RADS) was recently introduced to facilitate interdisciplinary communication of coronary CT angiography (CTA) results. We aimed to assess the agreement between manual and automated CAD-RADS classification using a structured reporting platform. Five readers prospectively interpreted 500 coronary CT angiographies using a structured reporting platform that automatically calculates the CAD-RADS score based on stenosis and plaque parameters manually entered by the reader. In addition, all readers manually assessed CAD-RADS blinded to the automatically derived results, which served as the reference standard. We evaluated factors influencing reader performance, including CAD-RADS training, clinical load, time of day and level of expertise. Total agreement between manual and automated classification was 80.2%. Agreement in stenosis categories was 86.7%, whereas agreement in the modifiers was 95.8% for "N", 96.8% for "S", 95.6% for "V" and 99.4% for "G". Agreement for "V" improved after CAD-RADS training (p = 0.047). Time of day and clinical load did not influence reader performance (both p > 0.05). Less experienced readers had higher total agreement than more experienced readers (87.0% vs 78.0%, respectively; p = 0.011). Even though automated CAD-RADS classification uses data entered by the readers, it outperforms manual classification by preventing human errors. Structured reporting platforms with automated calculation of the CAD-RADS score might improve data quality and support standardization of clinical decision making. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
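
    The headline figures above are simple percent agreement between the manual calls and the automatically calculated reference standard. A sketch of that calculation (categories are illustrative, not study data):

```python
def percent_agreement(manual, automated):
    """Overall percent agreement between manually assigned CAD-RADS categories
    and the automatically calculated reference standard (illustrative labels)."""
    matches = sum(m == a for m, a in zip(manual, automated))
    return 100.0 * matches / len(manual)

# hypothetical CAD-RADS categories for seven studies
pa = percent_agreement(list("0123123"), list("0123122"))  # ≈ 85.7
```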

  6. Spectroscopic and molecular structure investigation of 2-furanacrylic acid monomer and dimer using HF and DFT methods

    NASA Astrophysics Data System (ADS)

    Ghalla, H.; Issaoui, N.; Govindarajan, M.; Flakus, H. T.; Jamroz, M. H.; Oujia, B.

    2014-02-01

    In the present work, we report a combined experimental and theoretical study on the molecular structure and vibrational spectra of 2-furanacrylic acid (abbreviated as 2FAA). The FT-IR and FT-Raman spectra of 2FAA have been recorded in the regions 4000-400 and 4000-100 cm-1. The spectra were interpreted in terms of fundamental modes, combination and overtone bands. The monomer and dimer structures of the title molecule have been obtained from Hartree-Fock (HF) and density functional theory (DFT) B3LYP calculations with the 6-311++G(d,p) basis set. The vibrational frequencies were calculated by the DFT method and compared with the experimental frequencies, yielding good agreement between observed and calculated values. Intermolecular OH⋯O hydrogen bonds in the dimer structure of the molecule are discussed. The infrared and Raman spectra were also predicted from the calculated intensities. The polarizability and first-order hyperpolarizability of the title molecule were calculated and interpreted. A study of the electronic properties, such as excitation energies, oscillator strengths, wavelengths, and HOMO and LUMO energies, was performed by the time-dependent DFT (TD-DFT) approach. In addition, Mulliken atomic charges, possible charge transfer, natural bond orbital (NBO) and AIM topological analyses were performed. Moreover, the molecular electrostatic potential (MEP) and the thermodynamic properties (heat capacity, entropy, and enthalpy) of the title compound at different temperatures were calculated in the gas phase.

  7. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a locally optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and computational performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
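
    A minimal SFLA sketch, assuming the generic form of the algorithm (sorted frogs dealt into memeplexes, worst frog leaping toward the local best, then the global best, then a random restart); the toy misfit function stands in for a dispersion-curve misfit and is not the authors' implementation:

```python
import random

def sfla(objective, dim, bounds, n_frogs=30, n_memeplexes=3, n_iters=50, n_local=5):
    """Minimal shuffled frog-leaping algorithm for minimization (generic sketch)."""
    lo, hi = bounds
    frogs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_frogs)]
    for _ in range(n_iters):
        frogs.sort(key=objective)                          # global sort (the 'shuffle')
        best_global = frogs[0]
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for plex in memeplexes:
            for _ in range(n_local):
                plex.sort(key=objective)
                best, worst = plex[0], plex[-1]
                # leap: worst frog moves toward the memeplex best
                cand = [w + random.random() * (b - w) for w, b in zip(worst, best)]
                if objective(cand) >= objective(worst):
                    # no improvement: try leaping toward the global best
                    cand = [w + random.random() * (g - w) for w, g in zip(worst, best_global)]
                    if objective(cand) >= objective(worst):
                        # still no improvement: random restart
                        cand = [random.uniform(lo, hi) for _ in range(dim)]
                plex[-1] = cand
        frogs = [f for plex in memeplexes for f in plex]
    return min(frogs, key=objective)

# toy misfit: distance to a 'true' model, standing in for a dispersion-curve misfit
best = sfla(lambda m: sum((x - 2.0) ** 2 for x in m), dim=3, bounds=(0.0, 5.0))
```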

  8. Theoretical investigation of the structural stabilities, optoelectronic properties and thermodynamic characteristics of GaPxSb1-x ternary alloys

    NASA Astrophysics Data System (ADS)

    Oumelaz, F.; Nemiri, O.; Boumaza, A.; Ghemid, S.; Meradji, H.; Bin Omran, S.; El Haj Hassan, F.; Rai, D. P.; Khenata, R.

    2018-06-01

    In this theoretical study, we have investigated the structural, phase transition, electronic, thermodynamic and optical properties of GaPxSb1-x ternary alloys. Our calculations are performed with the WIEN2k code based on density functional theory using the full-potential linearized augmented plane wave method. For the electron exchange-correlation potential, a generalized gradient approximation within the Wu-Cohen scheme is considered. The recently developed Tran-Blaha modified Becke-Johnson potential has also been used to improve the underestimated band gap. The structural properties, including the lattice constants, the bulk moduli and their pressure derivatives, are in very good agreement with the available experimental data and theoretical results. Several structural phase transitions were studied here to establish the stable structure and to predict the phase transition under hydrostatic pressure. The computed transition pressure (Pt) of the material of interest from the zinc blende (B3) to the rock salt (B1) phase has been determined and found to agree well with the experimental and theoretical data. The calculated band structure shows that the GaSb binary compound and the ternary alloys are direct band gap semiconductors. Optical parameters such as the dielectric constants and the refractive indices are calculated and analyzed. The thermodynamic results are also interpreted and analyzed.

  9. Design and implementation of a random neural network routing engine.

    PubMed

    Kocak, T; Seeber, J; Terzioglu, H

    2003-01-01

    The random neural network (RNN) is an analytically tractable spiked neural network model that has been implemented in software for a wide range of applications for over a decade. This paper presents a hardware implementation of the RNN model. Recently, the cognitive packet network (CPN) has been proposed as an alternative packet network architecture in which there is no routing table; instead, RNN-based reinforcement learning is used to route packets. In particular, we describe implementation details for the RNN-based routing engine of a CPN network processor chip: the smart packet processor (SPP). The SPP is a dual-port device that stores, modifies, and interprets the defining characteristics of multiple RNN models. In addition to hardware design improvements over the software implementation, such as the dual-access memory, output calculation step, and reduced output calculation module, this paper introduces a major modification to the reinforcement learning algorithm used in the original CPN specification such that the number of weight terms is reduced from 2n² to 2n. This not only yields significant memory savings, but also simplifies the calculations of the steady-state probabilities (the neuron outputs in the RNN). Simulations have been conducted to confirm proper functionality for the isolated SPP design as well as for multiple SPPs in a networked environment.
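
    The steady-state probabilities mentioned above solve a nonlinear fixed-point system; in Gelenbe's standard RNN each neuron's excitation probability is its excitatory arrival rate divided by its firing rate plus inhibitory arrival rate. A fixed-point iteration sketch of the generic textbook form (the SPP chip uses a reduced-weight variant; all rates below are hypothetical):

```python
def rnn_steady_state(Lam, lam, r, w_plus, w_minus, n_iters=200):
    """Fixed-point iteration for the steady-state excitation probabilities
    q_i = lambda+_i / (r_i + lambda-_i) of a random neural network, where
    lambda+/- collect external arrivals plus weighted traffic from other neurons."""
    n = len(r)
    q = [0.0] * n
    for _ in range(n_iters):
        for i in range(n):
            lp = Lam[i] + sum(q[j] * w_plus[j][i] for j in range(n))   # excitatory arrivals
            lm = lam[i] + sum(q[j] * w_minus[j][i] for j in range(n))  # inhibitory arrivals
            q[i] = min(1.0, lp / (r[i] + lm))
    return q

# tiny two-neuron example with hypothetical rates: neuron 0 excites neuron 1
q = rnn_steady_state(Lam=[0.4, 0.1], lam=[0.1, 0.1], r=[1.0, 1.0],
                     w_plus=[[0.0, 0.5], [0.0, 0.0]],
                     w_minus=[[0.0, 0.2], [0.0, 0.0]])
```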

  10. Rowan Gorilla I rigged up, heads for eastern Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-03-01

    Designed to operate in very hostile offshore environments, the first of the Rowan Gorilla class of self-elevating drilling rigs has been towed to its drilling assignment offshore Nova Scotia. About 40% larger than other jackups, these rigs can operate in 300 ft of water, drilling holes as deep as 30,000 ft. They also feature unique high-pressure and solids control systems that are expected to improve drilling procedures and efficiencies. A quantitative formation pressure evaluation program for the Hewlett-Packard HP-41 handheld calculator computes formation pressures by three independent methods - the corrected d exponent, Bourgoyne and Young, and normalized penetration rate techniques for abnormal pressure detection and computation. Based on empirically derived drilling rate equations, each of the methods can be calculated separately, without being dependent on or influenced by the results or stored data from the other two subprograms. The quantitative interpretation procedure involves establishing a normal drilling rate trend and calculating the pore pressure from the magnitude of the drilling rate trend or plotting parameter increases above the trend line. Mobil's quick, accurate program could aid drilling operators in selecting the casing point, minimizing differential sticking, maintaining the proper mud weights to avoid kicks and lost circulation, and maximizing penetration rates.

  11. Teaching crucial skills: An electrocardiogram teaching module for medical students.

    PubMed

    Chudgar, Saumil M; Engle, Deborah L; Grochowski, Colleen O'Connor; Gagliardi, Jane P

    2016-01-01

    Medical student performance in electrocardiogram (ECG) interpretation at our institution could be improved. Varied resources exist to teach students this essential skill. We created an ECG teaching module (ECGTM) of 75 cases representing 15 diagnoses to improve medical students' performance and confidence in ECG interpretation. Students underwent pre- and post-clerkship testing to assess ECG interpretation skills and confidence and also end-of-clinical-year testing in ECG and laboratory interpretation. Performance was compared for the years before and during ECGTM availability. Eighty-four percent of students (total n=101) reported using the ECGTM; 98% of those who used it reported it was useful. Students' performance and confidence were higher on the post-test. Students with access to the ECGTM (n=101) performed significantly better than students from the previous year (n=90) on the end-of-year ECG test. The continuous availability of an ECGTM was associated with improved confidence and ability in ECG interpretation. The ECGTM may be another available tool to help students as they learn to read ECGs. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  13. Secondary Interpretation of CT Examinations: Frequency and Payment in the Medicare Fee-for-Service Population.

    PubMed

    Lu, Michael T; Hallett, Travis R; Hemingway, Jennifer; Hughes, Danny R; Hoffmann, Udo; Duszak, Richard

    2016-09-01

    Secondary interpretation of diagnostic imaging examinations (providing a second formal interpretation for imaging performed at another institution) may reduce repeat imaging after transfer of care. Recently, CMS requested information to guide payment policy. We aimed to study historical trends in submitted claims and payments for secondary interpretation services in the Medicare fee-for-service population. Applying Current Procedural Terminology codes by body part to Medicare Part B aggregate claims files, we identified all CT interpretation services rendered between 1999 and 2012. Secondary interpretation services were identified using combined code modifiers 26 and 77, in accordance with CMS billing guidelines. The frequencies of billed and denied services were extracted for primary and secondary CT interpretation services. Denial rates for primary versus secondary interpretations were calculated and compared. Of all 227 million Medicare Part B claims for CT services, 299,468 (0.13%) were for secondary interpretation services. From 1999 to 2012, growth in secondary interpretation claims outpaced that in primary interpretation claims (+811% versus +56%; compound annual growth rate 17% versus 3.2%). As a percentage of all services, secondary interpretations increased from 0.05% in 1999 to 0.30% in 2012. Denial rates for secondary interpretations decreased from 1999 to 2012 (12.7% to 7.0%) and now approach those for primary interpretations (5.4% in 2012). Medicare claims for secondary interpretation of CT examinations are growing but account for less than 1% of all billed CT interpretation services. Denial rates are similar to those of primary interpretation services. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
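
    The growth figures above are compound annual growth rates, computed as (end/start)^(1/years) − 1. A sketch (the numbers below are illustrative, not the study's claim counts):

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate that takes
    `start` to `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

growth = cagr(100.0, 200.0, 10)   # ≈ 0.0718, i.e. about 7.2% per year
```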

  14. Prescriptive models to support decision making in genetics.

    PubMed

    Pauker, S G; Pauker, S P

    1987-01-01

    Formal prescriptive models can help patients and clinicians better understand the risks and uncertainties they face and better formulate well-reasoned decisions. Using Bayes rule, the clinician can interpret pedigrees, historical data, physical findings and laboratory data, providing individualized probabilities of various diagnoses and outcomes of pregnancy. With the advent of screening programs for genetic disease, it becomes increasingly important to consider the prior probabilities of disease when interpreting an abnormal screening test result. Decision trees provide a convenient formalism for structuring diagnostic, therapeutic and reproductive decisions; such trees can also enhance communication between clinicians and patients. Utility theory provides a mechanism for patients to understand the choices they face and to communicate their attitudes about potential reproductive outcomes in a manner which encourages the integration of those attitudes into appropriate decisions. Using a decision tree, the relevant probabilities and the patients' utilities, physicians can estimate the relative worth of various medical and reproductive options by calculating the expected utility of each. By performing relevant sensitivity analyses, clinicians and patients can understand the impact of various soft data, including the patients' attitudes toward various health outcomes, on the decision making process. Formal clinical decision analytic models can provide deeper understanding and improved decision making in clinical genetics.
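
    The two calculations at the heart of the abstract, Bayes' rule for interpreting a screening result against the prior probability of disease, and the expected utility of a decision-tree branch, can be sketched as follows (all numbers are hypothetical illustrations, not drawn from the article):

```python
def posterior(prior, sens, spec):
    """Bayes' rule for a positive test: P(disease | positive) from the prior
    probability, test sensitivity, and specificity."""
    p_pos = sens * prior + (1.0 - spec) * (1.0 - prior)
    return sens * prior / p_pos

def expected_utility(branches):
    """Expected utility of one branch of a decision tree:
    sum of probability * utility over its outcomes."""
    return sum(p * u for p, u in branches)

# hypothetical: 1% prior, 99% sensitive, 95% specific screening test
post = posterior(prior=0.01, sens=0.99, spec=0.95)   # ≈ 0.167 despite the positive result
# hypothetical decision branch: two outcomes with utilities on [0, 1]
eu = expected_utility([(0.9, 1.0), (0.1, 0.2)])      # 0.92
```

    The first example shows why the abstract stresses prior probabilities in screening programs: even a very accurate test yields a modest posterior when the condition is rare.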

  15. Inter-Rater Reliability and Downstream Financial Implications of Electrocardiography Screening in Young Athletes.

    PubMed

    Dhutia, Harshil; Malhotra, Aneil; Yeo, Tee Joo; Ster, Irina Chis; Gabus, Vincent; Steriotis, Alexandros; Dores, Helder; Mellor, Greg; García-Corrales, Carmen; Ensam, Bode; Jayalapan, Viknesh; Ezzat, Vivienne Anne; Finocchiaro, Gherardo; Gati, Sabiha; Papadakis, Michael; Tome-Esteban, Maria; Sharma, Sanjay

    2017-08-01

    Preparticipation screening for cardiovascular disease in young athletes with electrocardiography is endorsed by the European Society of Cardiology and several major sporting organizations. One of the concerns of the ECG as a screening test in young athletes relates to the potential for variation in interpretation. We investigated the degree of variation in ECG interpretation in athletes and its financial impact among cardiologists of differing experience. Eight cardiologists (4 with experience in screening athletes) each reported 400 ECGs of consecutively screened young athletes according to the 2010 European Society of Cardiology recommendations, Seattle criteria, and refined criteria. Cohen κ coefficient was used to calculate interobserver reliability. Cardiologists proposed secondary investigations after ECG interpretation, the costs of which were based on the UK National Health Service tariffs. Inexperienced cardiologists were more likely to classify an ECG as abnormal compared with experienced cardiologists (odds ratio, 1.44; 95% confidence interval, 1.03-2.02). Modification of ECG interpretation criteria improved interobserver reliability for categorizing an ECG as abnormal from poor (2010 European Society of Cardiology recommendations; κ=0.15) to moderate (refined criteria; κ=0.41) among inexperienced cardiologists; however, interobserver reliability was moderate for all 3 criteria among experienced cardiologists (κ=0.40-0.53). Inexperienced cardiologists were more likely to refer athletes for further evaluation compared with experienced cardiologists (odds ratio, 4.74; 95% confidence interval, 3.50-6.43) with poorer interobserver reliability (κ=0.22 versus κ=0.47). Interobserver reliability for secondary investigations after ECG interpretation ranged from poor to fair among inexperienced cardiologists (κ=0.15-0.30) and fair to moderate among experienced cardiologists (κ=0.21-0.46). 
The cost of cardiovascular evaluation per athlete was $175 (95% confidence interval, $142-$228) and $101 (95% confidence interval, $83-$131) for inexperienced and experienced cardiologists, respectively. Interpretation of the ECG in athletes and the resultant cascade of investigations are highly physician dependent even in experienced hands with important downstream financial implications, emphasizing the need for formal training and standardized diagnostic pathways. © 2017 American Heart Association, Inc.
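
    The interobserver reliability reported above uses Cohen's kappa, which corrects raw agreement for agreement expected by chance. A self-contained sketch for two raters (the ECG labels are illustrative, not study data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items:
    (p_o - p_e) / (1 - p_e), observed agreement corrected for chance."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)   # chance agreement
    return (po - pe) / (1.0 - pe)

# hypothetical normal (N) / abnormal (A) ECG calls from two cardiologists
kappa = cohens_kappa(list("NNANNAANNN"), list("NNANAAANNN"))  # ≈ 0.78
```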

  16. Vibrational spectra, UV and NMR, first order hyperpolarizability and HOMO-LUMO analysis of 2-amino-4-chloro-6-methylpyrimidine.

    PubMed

    Jayavarthanan, T; Sundaraganesan, N; Karabacak, M; Cinar, M; Kurt, M

    2012-11-01

    The solid phase FTIR and FT-Raman spectra of 2-amino-4-chloro-6-methylpyrimidine (2A4Cl6MP) have been recorded in the regions 400-4000 and 50-4000 cm(-1), respectively. The spectra have been interpreted in terms of fundamental modes, combination and overtone bands. The structure of the molecule has been optimized and the structural characteristics have been determined by the density functional theory (B3LYP) method with the 6-311++G(d,p) basis set. The vibrational frequencies were calculated and compared with the experimental frequencies, yielding good agreement between observed and calculated values. The infrared and Raman spectra have also been predicted from the calculated intensities. (1)H and (13)C NMR spectra were recorded, and (1)H and (13)C nuclear magnetic resonance chemical shifts of the molecule were calculated using the gauge-independent atomic orbital (GIAO) method. The UV-Vis spectrum of the compound was recorded in the region 200-400 nm, and the electronic properties HOMO and LUMO energies were computed using the time-dependent DFT (TD-DFT) approach. Nonlinear optical and thermodynamic properties were interpreted. All the calculated results were compared with the available experimental data for the title molecule. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Cantilever testing of sintered-silver interconnects

    DOE PAGES

    Wereszczak, Andrew A.; Chen, Branndon R.; Jadaan, Osama M.; ...

    2017-10-19

    Cantilever testing is an underutilized test method whose results and interpretations promote greater understanding of the tensile and shear failure responses of interconnects, metallizations, or bonded joints. The use and analysis of this method were pursued through the mechanical testing of sintered-silver interconnects that joined Ni/Au-plated copper pillars or Ti/Ni/Ag-plated silicon pillars to Ag-plated direct bonded copper substrates. Sintered-silver was chosen as the interconnect test medium because of its high electrical and thermal conductivities and high-temperature capability, attractive characteristics for a candidate interconnect in power electronic components and other devices. Deep beam theory was used to improve upon the estimations of the tensile and shear stresses calculated from classical beam theory. The failure stresses of the sintered-silver interconnects were observed to depend on the test condition and the test material system. In conclusion, the experimental simplicity of cantilever testing, and the ability to analytically calculate tensile and shear stresses at failure, make it an attractive mechanical test method for evaluating the failure response of interconnects.
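
    The classical-beam-theory baseline that the deep-beam analysis refines can be sketched for a rectangular section (the geometry and load below are hypothetical, not the paper's specimens):

```python
def cantilever_stresses(F, L, b, h):
    """Classical (Euler-Bernoulli) estimates for a rectangular cantilever of
    width b and height h loaded by a transverse force F at distance L from
    the joint: peak bending (tensile) stress sigma = M*c/I and average shear.
    Deep-beam theory refines these; this is the baseline only."""
    sigma = 6.0 * F * L / (b * h ** 2)   # M = F*L, c = h/2, I = b*h^3/12
    tau = F / (b * h)                    # average transverse shear over the section
    return sigma, tau

# hypothetical pillar joint: 10 N applied 2 mm from a 3 mm x 3 mm bond (SI units, Pa)
sigma, tau = cantilever_stresses(F=10.0, L=2e-3, b=3e-3, h=3e-3)
```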

  19. Do Professional Interpreters Improve Clinical Care for Patients with Limited English Proficiency? A Systematic Review of the Literature

    PubMed Central

    Karliner, Leah S; Jacobs, Elizabeth A; Chen, Alice Hm; Mutha, Sunita

    2007-01-01

    Objective To determine if professional medical interpreters have a positive impact on clinical care for limited English proficiency (LEP) patients. Data Sources A systematic literature search, limited to the English language, in PubMed and PsycINFO for publications between 1966 and September 2005, and a search of the Cochrane Library. Study Design Any peer-reviewed article that compared at least two language groups, contained data about professional medical interpreters, and addressed communication (errors and comprehension), utilization, clinical outcomes, or satisfaction was included. Of 3,698 references, 28 were found by multiple reviewers to meet inclusion criteria and, of these, 21 assessed professional interpreters separately from ad hoc interpreters. Data were abstracted from each article by two reviewers. Data were collected on the study design, size, comparison groups, analytic technique, interpreter training, and method of determining the participants' need for an interpreter. Each study was evaluated for the effect of interpreter use on four clinical topics that were most likely to either impact or reflect disparities in health and health care. Principal Findings In all four areas examined, use of professional interpreters was associated with greater improvement in clinical care than use of ad hoc interpreters, and professional interpreters appear to raise the quality of clinical care for LEP patients to approach or equal that for patients without language barriers. Conclusions Published studies report positive benefits of professional interpreters on communication (errors and comprehension), utilization, clinical outcomes and satisfaction with care. PMID:17362215

  20. The effect of a chest imaging lecture on emergency department doctors' ability to interpret chest CT images: a randomized study.

    PubMed

    Keijzers, Gerben; Sithirasenan, Vasugi

    2012-02-01

    To assess the chest computed tomography (CT) interpretation skills of emergency department (ED) doctors and to study the effect of a chest CT interpretation lecture on these skills. Sixty doctors in two EDs were randomized, using computerized randomization, to attend or not attend a chest CT interpretation lecture. Within 2 weeks of the lecture, the participants completed a questionnaire on demographic variables, anatomical knowledge, and diagnostic interpretation of 10 chest CT studies. Outcome measures included anatomical knowledge score, diagnosis score, and the combined overall score, all expressed as a percentage of correctly answered questions (0-100). Data on 58 doctors were analyzed, of whom 27 were randomized to attend the lecture. The CT interpretation lecture had no effect on anatomy knowledge scores (72.9 vs. 70.2%), diagnosis scores (71.2 vs. 69.2%), or overall scores (71.4 vs. 69.5%). Twenty-nine percent of doctors stated that they had a systematic approach to chest CT interpretation. Overall self-perceived competency in interpreting CT imaging (brain, chest, abdomen) was low (between 3.2 and 5.2 on a 10-point visual analogue scale). A single chest CT interpretation lecture did not improve chest CT interpretation by ED doctors. Less than one-third of doctors had a systematic approach to chest CT interpretation. A standardized systematic approach may improve interpretation skills.

  1. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between the correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided, and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To obtain an exact algorithmic relation between the three parameters we construct a very fast algorithm for their simultaneous calculation, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10⁴ points within minutes on an average notebook computer.
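
    The shared template-counting core of these three measures can be illustrated with a plain O(N²) sample entropy, which counts matching templates of length m and m+1 within a tolerance (a generic sketch of SampEn only, not the authors' fast simultaneous algorithm):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Plain O(N^2) sample entropy: -ln(A/B), where B counts pairs of length-m
    templates matching within tolerance r*SD and A does the same for length m+1.
    Every point of the series serves as a template source."""
    mean = sum(x) / len(x)
    sd = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    tol = r * sd
    def count(mm):
        n = len(x) - mm
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                # Chebyshev distance between the two templates
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

s = sample_entropy([math.sin(0.3 * i) for i in range(80)])
```

    Approximate entropy and the correlation sum differ from this mainly in whether self-matches are counted and how the counts are normalized, which is the interdependence the paper makes explicit.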

  2. The Electronic Spectrum of Iodine Revisited.

    ERIC Educational Resources Information Center

    McNaught, Ian J.

    1980-01-01

    Presents equations and techniques for calculating and interpreting many of the spectroscopically important parameters associated with the ground and second excited states of the iodine molecule. (Author/CS)
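
    The ground- and excited-state parameters in question are conventionally extracted from vibronic band positions via anharmonic-oscillator term values (standard textbook expressions, not quoted from the article):

```latex
G(v) = \omega_e \left(v + \tfrac{1}{2}\right) - \omega_e x_e \left(v + \tfrac{1}{2}\right)^2,
\qquad D_e \approx \frac{\omega_e^2}{4\, \omega_e x_e},
```

    where the dissociation-energy estimate follows from a Birge–Sponer-type extrapolation of the vibrational spacings.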

  3. Unlocking interpretation in near infrared multivariate calibrations by orthogonal partial least squares.

    PubMed

    Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan

    2009-01-01

    Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental difference could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). 
PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in addition, the Food and Drug Administration (FDA) demands "process understanding" in PAT. Both these issues make OPLS the ideal tool for a multitude of NIR calibrations. In conclusion, OPLS leads to better interpretation of spectrometry data (e.g., NIR) and improved understanding facilitates cross-scientific communication. Such improved knowledge will decrease risk, with respect to both accuracy and precision, when using NIR for PAT applications.

  4. MetaboAnalyst 3.0--making metabolomics more meaningful.

    PubMed

    Xia, Jianguo; Sinelnikov, Igor V; Han, Beomsoo; Wishart, David S

    2015-07-01

    MetaboAnalyst (www.metaboanalyst.ca) is a web server designed to permit comprehensive metabolomic data analysis, visualization and interpretation. It supports a wide range of complex statistical calculations and high quality graphical rendering functions that require significant computational resources. First introduced in 2009, MetaboAnalyst has experienced more than 50-fold growth in user traffic (>50 000 jobs processed each month). In order to keep up with the rapidly increasing computational demands and a growing number of requests to support translational and systems biology applications, we performed a substantial rewrite and major feature upgrade of the server. The result is MetaboAnalyst 3.0. By completely re-implementing the MetaboAnalyst suite using the latest web framework technologies, we have been able to substantially improve its performance, capacity and user interactivity. Three new modules have also been added, including: (i) a module for biomarker analysis based on the calculation of receiver operating characteristic curves; (ii) a module for sample size estimation and power analysis for improved planning of metabolomics studies; and (iii) a module to support integrative pathway analysis for both genes and metabolites. In addition, popular features found in existing modules have been significantly enhanced by upgrading the graphical output, expanding the compound libraries and adding support for more diverse organisms. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
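
    The receiver operating characteristic calculation behind a biomarker module reduces to a rank statistic: the area under the ROC curve equals the probability that a random positive case scores above a random negative one. A sketch with illustrative data (not MetaboAnalyst's implementation):

```python
def roc_auc(scores, labels):
    """Rank-based AUC, equivalent to the Mann-Whitney U statistic:
    the fraction of positive/negative pairs where the positive case
    scores higher (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((s > t) + 0.5 * (s == t) for s in pos for t in neg)
    return wins / (len(pos) * len(neg))

# hypothetical biomarker scores and case (1) / control (0) labels
auc = roc_auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])  # ≈ 0.889
```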

  5. Interpreter use in an inner city accident and emergency department.

    PubMed Central

    Leman, P

    1997-01-01

    OBJECTIVE: To determine the extent of communication problems that arose from patients whose primary language was non-English presenting to an inner city accident and emergency (A&E) department. METHODS: A prospective survey over seven consecutive days during September 1995. All adult patients other than those directly referred by their general practitioner to an inpatient team had a questionnaire completed by the A&E doctor first seeing the patient. The doctor recorded language ability and form of interpreter used, and estimated any prolongation of the consultation and ability to improve communication by the use of additional services. RESULTS: 103 patients (17%) did not speak English as their primary language; 55 patients (9.1% of the study population) had an English language ability rated as other than good, and 16 (29%) of these consultations could have been improved by the use of additional interpreter services; 28 patients overall (4.6% of the study population) required the use of an interpreter, who was usually a relative. CONCLUSIONS: A significant number of patients presenting to A&E have difficulty in communicating in English. These consultations could often have been improved by the use of additional interpreter services. Telephone interpreter services may provide the answer for use in A&E departments because of their instant and 24 hour availability. PMID:9132201

  6. Analysis and interpretation of cost data in randomised controlled trials: review of published studies

    PubMed Central

    Barber, Julie A; Thompson, Simon G

    1998-01-01

    Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. 
    Key messages: Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. In at least two thirds of the papers, the main conclusions regarding costs were not justified. The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving. PMID:9794854
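One way to supply the supporting statistical evidence the review finds missing is a bootstrap confidence interval for the difference in mean costs between arms, which copes with the skewness typical of cost data. A minimal sketch with invented per-patient costs (not data from any reviewed trial):

```python
import random

def bootstrap_mean_diff_ci(costs_a, costs_b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the difference in mean costs (A - B).
    Cost data are usually right-skewed, so resampling is often preferred
    over normal-theory intervals."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resample_a = [rng.choice(costs_a) for _ in costs_a]
        resample_b = [rng.choice(costs_b) for _ in costs_b]
        diffs.append(sum(resample_a) / len(resample_a)
                     - sum(resample_b) / len(resample_b))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical per-patient costs in two trial arms; note the skewed outlier.
arm_a = [1200, 950, 3100, 800, 15000, 1100, 700, 2500]
arm_b = [1000, 850, 900, 4000, 1200, 600, 950, 700]
lo, hi = bootstrap_mean_diff_ci(arm_a, arm_b)
print(f"95% CI for mean cost difference: ({lo:.0f}, {hi:.0f})")
```

If the interval spans zero, the trial provides no clear evidence of a cost difference, which is exactly the caveat the review urges authors to report.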

  7. Detailed seismic velocity structure of the ultra-slow spread crust at the Mid-Cayman Spreading Center from travel-time tomography and synthetic seismograms

    NASA Astrophysics Data System (ADS)

    Harding, J.; Van Avendonk, H. J.; Hayman, N. W.; Grevemeyer, I.; Peirce, C.

    2017-12-01

    The Mid-Cayman Spreading Center (MCSC), an ultraslow-spreading center in the Caribbean Sea, has formed highly variable oceanic crust. Seafloor dredges have recovered extrusive basalts in the axial deeps, as well as gabbro on bathymetric highs and exhumed mantle peridotite, along the MCSC, which is only 110 km long. Wide-angle refraction data were collected with active-source ocean bottom seismometers in April 2015, along lines parallel to and across the MCSC. Travel-time tomography produces relatively smooth 2-D tomographic models of compressional wave velocity. These velocity models reveal large along- and across-axis variations in seismic velocity, indicating possible changes in crustal thickness, composition, faulting, and magmatism. It is difficult, however, to differentiate between competing interpretations of seismic velocity using these tomographic models alone. For example, in some areas the seismic velocities may be explained by either thin igneous crust or exhumed, serpentinized mantle. Distinguishing between these two interpretations is important as we explore the relationships between magmatism, faulting, and hydrothermal venting at ultraslow-spreading centers. We therefore improved our constraints on the shallow seismic velocity structure of the MCSC by modeling the amplitude of seismic refractions in the wide-angle data set. Synthetic seismograms were calculated with a finite-difference method for a range of models with different vertical velocity gradients. Small-scale features in the velocity models, such as steep velocity gradients and Moho boundaries, were explored systematically to best fit the real data. With this approach, we have improved our understanding of the compressional velocity structure of the MCSC along with the geological interpretations that are consistent with three seismic refraction profiles.
    Line P01 shows along-axis variation in the thickness of the low-velocity layer, indicating two segment centers, while across-axis lines P02 and P03 show variations in igneous crustal thickness and exhumed mantle in some areas.

  8. Hydrogen Donor-Acceptor Fluctuations from Kinetic Isotope Effects: A Phenomenological Model

    PubMed Central

    Roston, Daniel; Cheatum, Christopher M.; Kohen, Amnon

    2012-01-01

    Kinetic isotope effects (KIEs) and their temperature dependence can probe the structural and dynamic nature of enzyme-catalyzed proton or hydride transfers. The molecular interpretation of their temperature dependence requires expensive and specialized QM/MM calculations to provide a quantitative molecular understanding. Currently available phenomenological models use a non-adiabatic assumption that is not appropriate for most hydride and proton-transfer reactions, while others require more parameters than the experimental data justify. Here we propose a phenomenological interpretation of KIEs based on a simple method to quantitatively link the size and temperature dependence of KIEs to a conformational distribution of the catalyzed reaction. The present model assumes adiabatic hydrogen tunneling, and by fitting experimental KIE data, the model yields a population distribution for fluctuations of the distance between donor and acceptor atoms. Fits to data from a variety of proton and hydride transfers catalyzed by enzymes and their mutants, as well as non-enzymatic reactions, reveal that steeply temperature-dependent KIEs indicate the presence of at least two distinct conformational populations, each with different kinetic behaviors. We present the results of these calculations for several published cases and discuss how the predictions of the calculations might be experimentally tested. The current analysis does not replace molecular quantum mechanics/molecular mechanics (QM/MM) investigations, but it provides a fast and accessible way to quantitatively interpret KIEs in the context of a Marcus-like model. PMID:22857146
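The paper's population-distribution model is too involved for a short sketch, but its experimental starting point, the temperature dependence of a KIE, reduces to an Arrhenius analysis. The sketch below uses hypothetical H/D KIEs at two temperatures to extract the isotope effect on the activation energy and the prefactor ratio; a prefactor ratio far from unity is the kind of steep temperature dependence the model interprets as evidence of multiple conformational populations.

```python
import math

def arrhenius_params(kie1, T1, kie2, T2, R=8.314):
    """From KIEs measured at two temperatures (K), extract the isotope
    effect on the activation energy (dEa = Ea_D - Ea_H, in J/mol) and the
    Arrhenius prefactor ratio A_H/A_D, assuming
    KIE(T) = (A_H/A_D) * exp(dEa / (R*T))."""
    dEa = R * math.log(kie1 / kie2) / (1 / T1 - 1 / T2)
    a_ratio = kie1 / math.exp(dEa / (R * T1))
    return dEa, a_ratio

# Hypothetical H/D KIEs at 5 and 45 degrees Celsius.
dEa, a_ratio = arrhenius_params(7.0, 278.15, 4.0, 318.15)
print(f"dEa = {dEa / 1000:.1f} kJ/mol, A_H/A_D = {a_ratio:.2f}")
```

Here the strongly temperature-dependent KIE yields A_H/A_D well below the semiclassical expectation of ~1, the phenomenological signature the model connects to donor-acceptor distance fluctuations.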

  9. A training program for anthropometric measurements by a dedicated nutrition support team improves nutritional status assessment of the critically ill child.

    PubMed

    Valla, Frederic V; Ford-Chessel, Carole; Meyer, Rosan; Berthiller, Julien; Dupenloup, Christine; Follin-Arbelet, Nathalie; Hubert, Anna; Javouhey, Etienne; Peretti, Noel

    2015-03-01

    The cornerstone of an optimal nutrition approach in PICUs is to evaluate the nutritional status of any patient. Anthropometric measurements and nutritional indices calculation allow for nutritional status assessment, which is not often part of routine management, as it is considered difficult to perform in this setting. We designed a study to evaluate the impact of a training program by the PICU nutritional support team on the implementation of routine anthropometric measurements in our PICU. A prospective study was performed over a 2-year period, which included: a baseline evaluation of nutritional assessment, knowledge, anthropometric measurements (weight, height, and head and mid upper arm circumferences), and nutritional indices calculation in patient files. This was followed by a training program to implement the newly developed nutrition assessment guidelines, which covered both the anthropometric measurements and their interpretation. The impact of this nutritional assessment program was reviewed annually for 2 years after the implementation. PICU--Lyon, France. PICU nursing and medical staff, and patients admitted in February 2011, 2012, and 2013. Training program. Ninety-nine percent of staff (n = 145) attended the individual teaching. We found significant progress in nutritional awareness and confidence about nutritional assessment following the teaching program. In addition, an improvement in staff knowledge about undernutrition and its consequences was found. We enrolled 41, 55, and 91 patients in 2011, 2012, and 2013, respectively. There was a significant increase in anthropometric measurements during this time: 32% in 2011, 65% in 2012 (p = 0.002), and 96% in 2013 (p < 0.001). Nutritional indices were calculated in 20%, 74% (p < 0.001), and 96% (p < 0.001) of cases.
    This is the first study to show that a targeted nutritional assessment teaching program, highlighting both the importance and the techniques of anthropometric measurements, can be successfully implemented in a PICU. It improved staff knowledge and nutritional practice.
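The abstract does not specify which nutritional indices were calculated; as an illustration, one widely used index, the Waterlow weight-for-height percentage of the reference median, can be computed as follows. The values and the grading cut-offs below are the conventional ones, not necessarily those used in the study.

```python
def waterlow_wfh(weight_kg, median_weight_for_height_kg):
    """Weight-for-height as a percentage of the reference median
    (Waterlow index), a common screen for acute undernutrition.
    Grading uses the conventional 90/80/70% cut-offs."""
    pct = 100 * weight_kg / median_weight_for_height_kg
    if pct >= 90:
        grade = "normal"
    elif pct >= 80:
        grade = "mild"
    elif pct >= 70:
        grade = "moderate"
    else:
        grade = "severe"
    return pct, grade

# Hypothetical child: weighs 14 kg; reference median weight for the
# child's height is 17.5 kg.
pct, grade = waterlow_wfh(14.0, 17.5)
print(f"{pct:.0f}% of median -> {grade} acute undernutrition")
```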

  10. Multiscale examination and modeling of electron transport in nanoscale materials and devices

    NASA Astrophysics Data System (ADS)

    Banyai, Douglas R.

    For half a century the integrated circuits (ICs) that make up the heart of electronic devices have been steadily improving by shrinking at an exponential rate. However, as the current crop of ICs gets smaller and the insulating layers involved become thinner, electrons leak through due to quantum mechanical tunneling. This is one of several issues which will bring an end to this incredible streak of exponential improvement of this type of transistor device, after which future improvements will have to come from employing fundamentally different transistor architectures rather than fine-tuning and miniaturizing the metal-oxide-semiconductor field effect transistors (MOSFETs) in use today. Several new transistor designs, some designed and built here at Michigan Tech, involve electrons tunneling their way through arrays of nanoparticles. We use a multi-scale approach to model these devices and study their behavior. For investigating the tunneling characteristics of the individual junctions, we use a first-principles approach to model conduction between sub-nanometer gold particles. To estimate the change in energy due to the movement of individual electrons, we use the finite element method to calculate electrostatic capacitances. The kinetic Monte Carlo method allows us to use our knowledge of these details to simulate the dynamics of an entire device---sometimes consisting of hundreds of individual particles---and watch as a device 'turns on' and starts conducting an electric current. Scanning tunneling microscopy (STM) and the closely related scanning tunneling spectroscopy (STS) are a family of powerful experimental techniques that allow for the probing and imaging of surfaces and molecules at atomic resolution. However, interpretation of the results often requires comparison with theoretical and computational models. We have developed a new method for calculating STM topographs and STS spectra.
This method combines an established method for approximating the geometric variation of the electronic density of states, with a modern method for calculating spin-dependent tunneling currents, offering a unique balance between accuracy and accessibility.
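The kinetic Monte Carlo method mentioned above advances a system one stochastic event at a time: an event is chosen with probability proportional to its rate, and the clock advances by an exponentially distributed increment. A minimal rejection-free (Gillespie-type) step, with invented tunneling rates, might look like:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step.
    Picks event i with probability rates[i] / sum(rates), then draws the
    waiting time dt from an exponential with mean 1 / sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    chosen = len(rates) - 1  # fallback for floating-point edge cases
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

# Hypothetical tunneling rates (1/s) for three candidate electron hops
# between nanoparticles.
rng = random.Random(42)
rates = [1e9, 5e8, 1e7]
event, dt = kmc_step(rates, rng)
print(f"electron performs hop {event} after {dt:.2e} s")
```

Iterating this step over all particle-to-particle junctions, with rates updated after each hop, is what lets a whole-device simulation 'turn on' and carry current.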

  11. Probing New Long-Range Interactions by Isotope Shift Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berengut, Julian C.; Budker, Dmitry; Delaunay, Cédric

    We explore a method to probe new long- and intermediate-range interactions using precision atomic isotope shift spectroscopy. We develop a formalism to interpret linear King plots as bounds on new physics with minimal theory inputs. We focus only on bounding the new physics contributions that can be calculated independently of the standard model nuclear effects. We apply our method to existing Ca+ data and project its sensitivity to conjectured new bosons with spin-independent couplings to the electron and the neutron using narrow transitions in other atoms and ions, specifically, Sr and Yb. Future measurements are expected to improve the relative precision by 5 orders of magnitude, and they can potentially lead to an unprecedented sensitivity for bosons within the 0.3 to 10 MeV mass range.

  12. Probing New Long-Range Interactions by Isotope Shift Spectroscopy.

    PubMed

    Berengut, Julian C; Budker, Dmitry; Delaunay, Cédric; Flambaum, Victor V; Frugiuele, Claudia; Fuchs, Elina; Grojean, Christophe; Harnik, Roni; Ozeri, Roee; Perez, Gilad; Soreq, Yotam

    2018-03-02

    We explore a method to probe new long- and intermediate-range interactions using precision atomic isotope shift spectroscopy. We develop a formalism to interpret linear King plots as bounds on new physics with minimal theory inputs. We focus only on bounding the new physics contributions that can be calculated independently of the standard model nuclear effects. We apply our method to existing Ca+ data and project its sensitivity to conjectured new bosons with spin-independent couplings to the electron and the neutron using narrow transitions in other atoms and ions, specifically, Sr and Yb. Future measurements are expected to improve the relative precision by 5 orders of magnitude, and they can potentially lead to an unprecedented sensitivity for bosons within the 0.3 to 10 MeV mass range.
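The linear King plot at the heart of the method can be sketched numerically: isotope shifts of two transitions, 'modified' by inverse reduced-mass factors, are fitted to a straight line, and residuals beyond experimental uncertainty would signal either nuclear structure effects or new physics. The data below are invented and constructed to be exactly linear, so the residuals vanish.

```python
def king_fit(mu_inv, shifts_1, shifts_2):
    """Least-squares line through the modified isotope shifts of two
    transitions. mu_inv holds the inverse reduced-mass differences used
    to 'modify' the raw shifts; shifts_1/shifts_2 are the raw isotope
    shifts of the two transitions for each isotope pair."""
    x = [s / m for s, m in zip(shifts_1, mu_inv)]
    y = [s / m for s, m in zip(shifts_2, mu_inv)]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
             / sum((xi - xb) ** 2 for xi in x))
    intercept = yb - slope * xb
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    return slope, intercept, residuals

# Invented data for three isotope pairs (arbitrary units), exactly linear.
mu_inv = [1.0, 1.1, 1.25]
shifts_1 = [2.0, 2.42, 3.0]
shifts_2 = [3.0, 3.52, 4.25]
slope, intercept, res = king_fit(mu_inv, shifts_1, shifts_2)
print(slope, intercept, max(abs(r) for r in res))
```

In the paper's formalism it is a statistically significant nonzero residual, after nuclear effects are bounded, that would translate into a constraint on a new boson's mass and couplings.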

  13. Probing New Long-Range Interactions by Isotope Shift Spectroscopy

    DOE PAGES

    Berengut, Julian C.; Budker, Dmitry; Delaunay, Cédric; ...

    2018-02-26

    We explore a method to probe new long- and intermediate-range interactions using precision atomic isotope shift spectroscopy. We develop a formalism to interpret linear King plots as bounds on new physics with minimal theory inputs. We focus only on bounding the new physics contributions that can be calculated independently of the standard model nuclear effects. We apply our method to existing Ca+ data and project its sensitivity to conjectured new bosons with spin-independent couplings to the electron and the neutron using narrow transitions in other atoms and ions, specifically, Sr and Yb. Future measurements are expected to improve the relative precision by 5 orders of magnitude, and they can potentially lead to an unprecedented sensitivity for bosons within the 0.3 to 10 MeV mass range.

  14. Statistical analysis of arthroplasty data

    PubMed Central

    2011-01-01

    It is envisaged that guidelines for statistical analysis and presentation of results will improve the quality and value of research. The Nordic Arthroplasty Register Association (NARA) has therefore developed guidelines for the statistical analysis of arthroplasty register data. The guidelines are divided into two parts, one with an introduction and a discussion of the background to the guidelines (Ranstam et al. 2011a, see pages x-y in this issue), and this one with a more technical statistical discussion on how specific problems can be handled. This second part contains (1) recommendations for the interpretation of methods used to calculate survival, (2) recommendations on how to deal with bilateral observations, and (3) a discussion of problems and pitfalls associated with analysis of factors that influence survival or comparisons between outcomes extracted from different hospitals. PMID:21619500
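Recommendation (1) concerns survival calculations; the standard nonparametric estimator for register data of this kind is Kaplan-Meier, which handles the censoring that arises when implants are still in place at the end of follow-up. A minimal sketch with hypothetical follow-up data (revision counted as failure):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up in years; events: 1 = revision (failure), 0 = censored.
    Returns (time, survival probability) at each failure time."""
    data = sorted(zip(times, events))
    n = len(data)
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        failures = sum(e for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _e in data if tt == t)
        if failures:
            surv *= (at_risk - failures) / at_risk
            curve.append((t, surv))
        at_risk -= leaving
        i += leaving
    return curve

# Hypothetical arthroplasty register extract: revisions at 1 and 3 years,
# the remaining implants censored at last follow-up.
times = [1, 2, 3, 4, 5, 5]
events = [1, 0, 1, 0, 0, 0]
print(kaplan_meier(times, events))
```

The guidelines' caveat applies directly here: with bilateral observations (two implants in one patient), the independence assumption behind this estimator is violated and needs special handling.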

  15. Modeling 15N NMR chemical shift changes in protein backbone with pressure

    NASA Astrophysics Data System (ADS)

    La Penna, Giovanni; Mori, Yoshiharu; Kitahara, Ryo; Akasaka, Kazuyuki; Okamoto, Yuko

    2016-08-01

    Nitrogen chemical shift is a useful parameter for determining the backbone three-dimensional structure of proteins. Empirical models for fast calculation of the 15N chemical shift are improving in reliability, but there are subtle effects that cannot be easily interpreted. Among these, the effects of slight changes in hydrogen bonds, both intramolecular and with water molecules in the solvent, are particularly difficult to predict. On the other hand, these hydrogen bonds are sensitive to changes in the protein environment. In this work, the change of the 15N chemical shift with pressure for backbone segments in the protein ubiquitin is correlated with the change in the population of hydrogen bonds involving the backbone amide group. The differing extent of interaction between the protein backbone and water molecules in the solvent is thereby highlighted.

  16. Advancements in dynamic kill calculations for blowout wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.

    1993-09-01

    This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.

  17. Approximate calculation of multispar cantilever and semicantilever wings with parallel ribs under direct and indirect loading

    NASA Technical Reports Server (NTRS)

    Sanger, Eugen

    1932-01-01

    A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.

  18. Calculation of optical and K pre-edge absorption spectra for ferrous iron of distorted sites in oxide crystals

    NASA Astrophysics Data System (ADS)

    Vercamer, Vincent; Hunault, Myrtille O. J. Y.; Lelong, Gérald; Haverkort, Maurits W.; Calas, Georges; Arai, Yusuke; Hijiya, Hiroyuki; Paulatto, Lorenzo; Brouder, Christian; Arrio, Marie-Anne; Juhin, Amélie

    2016-12-01

    Advanced semiempirical calculations have been performed to compute simultaneously optical absorption and K pre-edge x-ray absorption spectra of Fe2+ in four distinct site symmetries found in minerals. The four symmetries, i.e., a distorted octahedron, a distorted tetrahedron, a square planar site, and a trigonal bipyramidal site, are representative of the Fe2+ sites found in crystals and glasses. Particular attention has been paid to the definition of the p-d hybridization Hamiltonian, which occurs for noncentrosymmetric symmetries, in order to account for electric dipole transitions. For the different sites under study, an excellent agreement between calculations and experiments was found for both optical and x-ray absorption spectra, in particular in terms of relative intensities and energy positions of electronic transitions. To our knowledge, these are the first calculations of optical absorption spectra of Fe2+ placed in such diverse site symmetries, including centrosymmetric sites. The proposed theoretical model should help to interpret the features of both the optical absorption and the K pre-edge absorption spectra of 3d transition metal ions and to go beyond the usual fingerprint interpretation.

  19. Structure and vibrational spectra of melaminium bis(trifluoroacetate) trihydrate: FT-IR, FT-Raman and quantum chemical calculations.

    PubMed

    Sangeetha, V; Govindarajan, M; Kanagathara, N; Marchewka, M K; Gunasekaran, S; Anbalagan, G

    2014-05-05

    Melaminium bis(trifluoroacetate) trihydrate (MTFA), an organic material, has been synthesized and single crystals of MTFA have been grown by the slow solvent evaporation method at room temperature. X-ray powder diffraction analysis confirms that the MTFA crystal belongs to the monoclinic system with space group P2/c. The molecular geometry, vibrational frequencies and intensity of the vibrational bands have been interpreted with the aid of structure optimization based on the density functional theory (DFT) B3LYP method with 6-311G(d,p) and 6-311++G(d,p) basis sets. The X-ray diffraction data have been compared with the data of the optimized molecular structure. The theoretical results show that the crystal structure can be reproduced by the optimized geometry and the vibrational frequencies show good agreement with the experimental values. The nuclear magnetic resonance (NMR) chemical shift of the molecule has been calculated by the gauge independent atomic orbital (GIAO) method and compared with experimental results. HOMO-LUMO and other related molecular and electronic properties are calculated. The Mulliken and NBO charges have also been calculated and interpreted. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Student Use of Physics to Make Sense of Incomplete but Functional VPython Programs in a Lab Setting

    NASA Astrophysics Data System (ADS)

    Weatherford, Shawn A.

    2011-12-01

    Computational activities in Matter & Interactions, an introductory calculus-based physics course, have the instructional goal of providing students with the experience of applying the same small set of fundamental principles to model a wide range of physical systems. However, building computer programs under limited time constraints poses significant instructional challenges, especially for students who are unfamiliar with programming languages and concepts. Prior attempts at designing effective computational activities were successful at having students ultimately build working VPython programs under the tutelage of experienced teaching assistants in a studio lab setting. A pilot study revealed that students who completed these computational activities had significant difficulty repeating the same tasks and, further, had difficulty predicting the animation that would be produced by the example program after interpreting the program code. This study explores the interpretation and prediction tasks as part of an instructional sequence where students are asked to read and comprehend a functional, but incomplete program. Rather than asking students to begin their computational tasks with modifying program code, we explicitly ask students to interpret an existing program that is missing key lines of code. The missing lines of code correspond to the algebraic form of fundamental physics principles or the calculation of forces which would exist between analogous physical objects in the natural world. Students are then asked to draw a prediction of what they would see in the simulation produced by the VPython program and ultimately run the program to evaluate the students' prediction. This study specifically looks at how the participants use physics while interpreting the program code and creating a whiteboard prediction.
    This study also examines how students evaluate their understanding of the program and modification goals at the beginning of the modification task. While working in groups over the course of a semester, study participants were recorded while they completed three activities using these incomplete programs. Analysis of the video data showed that study participants had little difficulty interpreting physics quantities, generating a prediction, or determining how to modify the incomplete program. Participants did not base their prediction solely on the information from the incomplete program. When participants tried to predict the motion of the objects in the simulation, many turned to their knowledge of how the system would evolve if it represented an analogous real-world physical system. For example, participants attributed the real-world behavior of springs to helix objects even though the program did not include calculations for the spring to exert a force when stretched. Participants rarely interpreted lines of code in the computational loop during the first computational activity, but this changed during later computational activities, with most participants using their physics knowledge to interpret the computational loop. Computational activities in the Matter & Interactions curriculum were revised in light of these findings to include an instructional sequence of tasks to build a comprehension of the example program. The modified activities also ask students to create an additional whiteboard prediction for the time-evolution of the real-world phenomena which the example program will eventually model.
    This thesis shows how comprehension tasks identified by Palinscar and Brown (1984) as effective in improving reading comprehension are also effective in helping students apply their physics knowledge to interpret a computer program which attempts to model a real-world phenomenon and identify errors in their understanding of the use, or omission, of fundamental physics principles in a computational model.

  1. Digital recovery, modification, and analysis of Tetra Tech seismic horizon mapping, National Petroleum Reserve Alaska (NPRA), northern Alaska

    USGS Publications Warehouse

    Saltus, R.W.; Kulander, Christopher S.; Potter, Christopher J.

    2002-01-01

    We have digitized, modified, and analyzed seismic interpretation maps of 12 subsurface stratigraphic horizons spanning portions of the National Petroleum Reserve in Alaska (NPRA). These original maps were prepared by Tetra Tech, Inc., based on about 15,000 miles of seismic data collected from 1974 to 1981. We have also digitized interpreted faults and seismic velocities from Tetra Tech maps. The seismic surfaces were digitized as two-way travel time horizons and converted to depth using Tetra Tech seismic velocities. The depth surfaces were then modified by long-wavelength corrections based on recent USGS seismic re-interpretation along regional seismic lines. We have developed and executed an algorithm to identify and calculate statistics on the area, volume, height, and depth of closed structures based on these seismic horizons. These closure statistics are tabulated and have been used as input to oil and gas assessment calculations for the region. Directories accompanying this report contain basic digitized data, processed data, maps, tabulations of closure statistics, and software relating to this project.

  2. How Chinese Semantics Capability Improves Interpretation in Visual Communication

    ERIC Educational Resources Information Center

    Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung

    2017-01-01

    A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the result showed that students graduating from a general high school achieved more satisfactory results in semantic recognition and image interpretation tasks than students…

  3. Estimating the number needed to treat from continuous outcomes in randomised controlled trials: methodological challenges and worked example using data from the UK Back Pain Exercise and Manipulation (BEAM) trial.

    PubMed

    Froud, Robert; Eldridge, Sandra; Lall, Ranjit; Underwood, Martin

    2009-06-11

    Reporting numbers needed to treat (NNT) improves interpretability of trial results. It is unusual that continuous outcomes are converted to numbers of individual responders to treatment (i.e., those who reach a particular threshold of change); and deteriorations prevented are only rarely considered. We consider how numbers needed to treat can be derived from continuous outcomes; illustrated with a worked example showing the methods and challenges. We used data from the UK BEAM trial (n = 1,334) of physical treatments for back pain; originally reported as showing, at best, small to moderate benefits. Participants were randomised to receive 'best care' in general practice, the comparator treatment, or one of three manual and/or exercise treatments: 'best care' plus manipulation, exercise, or manipulation followed by exercise. We used established consensus thresholds for improvement in Roland-Morris disability questionnaire scores at three and twelve months to derive NNTs for improvements and for benefits (improvements gained + deteriorations prevented). At three months, NNT estimates ranged from 5.1 (95% CI 3.4 to 10.7) to 9.0 (5.0 to 45.5) for exercise, 5.0 (3.4 to 9.8) to 5.4 (3.8 to 9.9) for manipulation, and 3.3 (2.5 to 4.9) to 4.8 (3.5 to 7.8) for manipulation followed by exercise. Corresponding between-group mean differences in the Roland-Morris disability questionnaire were 1.6 (0.8 to 2.3), 1.4 (0.6 to 2.1), and 1.9 (1.2 to 2.6) points. In contrast to small mean differences originally reported, NNTs were small and could be attractive to clinicians, patients, and purchasers. NNTs can aid the interpretation of results of trials using continuous outcomes. Where possible, these should be reported alongside mean differences. Challenges remain in calculating NNTs for some continuous outcomes. UK BEAM trial registration: ISRCTN32683578.
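The conversion the authors describe, once a continuous outcome has been dichotomised into responders and non-responders, is just the reciprocal of the absolute risk difference. A minimal sketch with hypothetical responder rates (not the BEAM trial figures):

```python
def nnt(p_treat, p_control):
    """Number needed to treat from responder proportions:
    NNT = 1 / (p_treat - p_control), the absolute risk difference.
    In practice the result is rounded up to a whole number of patients."""
    arr = p_treat - p_control
    if arr <= 0:
        raise ValueError("treatment shows no benefit over control")
    return 1 / arr

# Hypothetical: 55% of treated patients reach the improvement threshold
# versus 35% with 'best care' alone.
print(nnt(0.55, 0.35))  # about 5: treat 5 patients for one extra responder
```

The paper's 'benefits' variant works the same way, with deteriorations prevented folded into the responder counts before the proportions are formed.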

  4. Wind Tunnel Measurements and Calculations of Aerodynamic Interactions Between Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Yamauchi, Gloria K.; Derby, Michael R.; Wadcock, Alan J.

    2002-01-01

    Wind tunnel measurements and calculations of the aerodynamic interactions between two tiltrotor aircraft in helicopter mode are presented. The measured results include the roll moment and thrust change on the downwind aircraft, as a function of the upwind aircraft position (longitudinal, lateral, and vertical). Magnitudes and locations of the largest interactions are identified. The calculated interactions generally match the measurements, with discrepancies attributed to the unsteadiness of the wake and aerodynamic forces on the airframe. To interpret the interactions in terms of control and power changes on the aircraft, additional calculations are presented for trimmed aircraft with gimballed rotors.

  5. The Cladistic Basis for the Phylogenetic Diversity (PD) Measure Links Evolutionary Features to Environmental Gradients and Supports Broad Applications of Microbial Ecology’s “Phylogenetic Beta Diversity” Framework

    PubMed Central

    Faith, Daniel P.; Lozupone, Catherine A.; Nipperess, David; Knight, Rob

    2009-01-01

    The PD measure of phylogenetic diversity interprets branch lengths cladistically to make inferences about feature diversity. PD calculations extend conventional species-level ecological indices to the feature level. The “phylogenetic beta diversity” framework developed by microbial ecologists calculates PD-dissimilarities between community localities. Interpretation of these PD-dissimilarities at the feature level explains the framework’s success in producing ordinations that reveal environmental gradients. An example gradient space using PD-dissimilarities illustrates how evolutionary features form unimodal response patterns to gradients. This features model supports new applications of existing species-level methods that are robust to unimodal responses, plus novel applications relating to climate change, commercial products discovery, and community assembly. PMID:20087461
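
    Faith's PD itself is the sum of branch lengths on the minimal subtree spanning a set of taxa. A minimal sketch on a hypothetical toy tree (real feature-level analyses use dedicated phylogenetics software):

```python
# Minimal sketch of Faith's phylogenetic diversity (PD): sum the branch
# lengths on all root-to-taxon paths, counting each branch once.
# The toy tree below is hypothetical.

# Each node maps to (parent, branch_length_to_parent); root has parent None.
TREE = {
    "root": (None, 0.0),
    "n1":   ("root", 0.5),
    "A":    ("root", 2.0),
    "B":    ("n1", 1.0),
    "C":    ("n1", 1.5),
}

def faith_pd(taxa, tree):
    """PD of a taxon set: spanning-subtree branch-length total."""
    counted = set()
    total = 0.0
    for taxon in taxa:
        node = taxon
        while node is not None and node not in counted:
            parent, length = tree[node]
            total += length
            counted.add(node)
            node = parent
    return total

print(faith_pd({"B", "C"}, TREE))  # -> 3.0 (1.0 + 1.5 + shared 0.5)
```
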

  6. Critical exponents for diluted resistor networks

    NASA Astrophysics Data System (ADS)

    Stenull, O.; Janssen, H. K.; Oerding, K.

    1999-05-01

    An approach by Stephen [Phys. Rev. B 17, 4444 (1978)] is used to investigate the critical properties of randomly diluted resistor networks near the percolation threshold by means of renormalized field theory. We reformulate an existing field theory by Harris and Lubensky [Phys. Rev. B 35, 6964 (1987)]. By a decomposition of the principal Feynman diagrams, we obtain diagrams which can again be interpreted as resistor networks. This interpretation provides an alternative way of evaluating the Feynman diagrams for random resistor networks. We calculate the resistance crossover exponent φ up to second order in ε = 6 − d, where d is the spatial dimension. Our result φ = 1 + ε/42 + 4ε²/3087 verifies a previous calculation by Lubensky and Wang, which itself was based on the Potts-model formulation of the random resistor network.
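
    The quoted series is easy to evaluate numerically. A small sketch that tabulates the two-loop result φ = 1 + ε/42 + 4ε²/3087 for a few spatial dimensions (extrapolating the truncated expansion down to d = 3 is only indicative):

```python
# Evaluate the second-order epsilon expansion of the resistance
# crossover exponent, phi = 1 + eps/42 + 4*eps^2/3087, eps = 6 - d.
# The series is perturbative; values far from d = 6 are indicative only.

def phi(d: int) -> float:
    eps = 6 - d
    return 1.0 + eps / 42.0 + 4.0 * eps**2 / 3087.0

for d in (5, 4, 3):
    print(d, round(phi(d), 4))
```
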

  7. A review of computer aided interpretation technology for the evaluation of radiographs of aluminum welds

    NASA Technical Reports Server (NTRS)

    Lloyd, J. F., Sr.

    1987-01-01

    Industrial radiography is a well established, reliable means of providing nondestructive structural integrity information. The majority of industrial radiographs are interpreted by trained human eyes using transmitted light and various visual aids. Hundreds of miles of radiographic information are evaluated, documented and archived annually. In many instances, there are serious considerations in terms of interpreter fatigue, subjectivity and limited archival space. Quite often it is difficult to quickly retrieve radiographic information for further analysis or investigation. Methods of improving the quality and efficiency of the radiographic process are being explored, developed and incorporated whenever feasible. High resolution cameras, digital image processing, and mass digital data storage offer interesting possibilities for improving the industrial radiographic process. A review is presented of computer aided radiographic interpretation technology in terms of how it could be used to enhance the radiographic interpretation process in evaluating radiographs of aluminum welds.

  8. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    NASA Astrophysics Data System (ADS)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared with other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly over the year: ET ref was generally overestimated at small values and rather underestimated when ET was large, a pattern also supported by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Evidently, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing deviations.
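
    The daily ASCE-EWRI standardized equation can be sketched directly. The constants Cn = 900 and Cd = 0.34 are the standard short-reference daily values; the input numbers in the example are illustrative, not the lysimeter site's data:

```python
import math

# Sketch of the daily ASCE-EWRI standardized reference evapotranspiration
# (short reference: Cn = 900, Cd = 0.34). Units follow the standard:
# T in degC, u2 in m/s, Rn and G in MJ m-2 d-1, vapour pressures in kPa,
# ET in mm/d. Input values below are illustrative.

def sat_vp(t_c):
    """Saturation vapour pressure (kPa) at air temperature t_c (degC)."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def et_ref_short(t_c, u2, rn, g, ea, elev_m):
    p = 101.3 * ((293.0 - 0.0065 * elev_m) / 293.0) ** 5.26  # pressure, kPa
    gamma = 0.000665 * p                                      # psychrometric const.
    es = sat_vp(t_c)
    delta = 4098.0 * es / (t_c + 237.3) ** 2                  # slope of vp curve
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_c + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# A mild day: result is typically a few mm per day.
print(round(et_ref_short(t_c=20.0, u2=2.0, rn=15.0, g=0.0, ea=1.4, elev_m=200.0), 2))
```
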

  9. 3D printing of patient-specific anatomy: A tool to improve patient consent and enhance imaging interpretation by trainees.

    PubMed

    Liew, Yaoren; Beveridge, Erin; Demetriades, Andreas K; Hughes, Mark A

    2015-01-01

    We report the use of three-dimensional (3D) printed, patient-specific anatomy as a tool to improve informed patient consent and patient understanding in a case of posterior lumbar fixation. We then discuss its utility as an educational tool to enhance imaging interpretation by neurosurgery trainees.

  10. a Kml-Based Approach for Distributed Collaborative Interpretation of Remote Sensing Images in the Geo-Browser

    NASA Astrophysics Data System (ADS)

    Huang, L.; Zhu, X.; Guo, W.; Xiang, L.; Chen, X.; Mei, Y.

    2012-07-01

    Existing implementations of collaborative image interpretation have many limitations for very large satellite imagery, such as inefficient browsing and slow transmission. This article presents a KML-based approach to support distributed, real-time, synchronous collaborative interpretation of remote sensing images in the geo-browser. As an OGC standard, KML (Keyhole Markup Language) has the advantage of organizing various types of geospatial data (including imagery, annotations, geometry, etc.) in the geo-browser. Existing KML elements can be used to describe simple interpretation results indicated by vector symbols. To extend its applicability, this article expands KML with elements that describe complex image processing operations, including band combination, grey-level transformation, and geometric correction. The extended KML is employed to describe and share interpretation operations and results among interpreters. Further, this article develops collaboration-related services, namely a collaboration launch service, a perceiving service, and a communication service. The launch service creates a collaborative interpretation task and provides a unified interface for all participants. The perceiving service allows interpreters to share collaboration awareness. The communication service provides interpreters with written-word communication. Finally, the GeoGlobe geo-browser (an extensible and flexible geospatial platform developed in LIESMARS) is selected to perform experiments in collaborative image interpretation. The geo-browser, which manages and visualizes massive geospatial information, can provide distributed users with quick browsing and transmission. Meanwhile, in the geo-browser, GIS data (for example DEM, DTM, thematic maps, etc.) can be integrated to assist in improving the accuracy of interpretation. Results show that the proposed method can support distributed collaborative interpretation of remote sensing images.

  11. Development and prospective evaluation of an automated software system for quality control of quantitative 99mTc-MAG3 renal studies.

    PubMed

    Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T

    2007-03-01

    Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
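
    The kind of rule-based checks described can be illustrated with a hypothetical sketch; the field names, thresholds, and event categories below are invented for illustration and are not those of the actual software:

```python
# Hypothetical sketch of automated QC checks for a quantitative renal
# study: verify user-entered values, region sizes, and frame
# completeness, then return a summary of flagged events. All field
# names and thresholds are illustrative.

def qc_renal_study(study):
    events = []
    # technical checks (acquisition/processing and user input)
    if not (1 <= study["dose_mbq"] <= 400):
        events.append(("technical", "user-entered dose out of range"))
    if study["frames_expected"] != study["frames_acquired"]:
        events.append(("technical", "missing frames in dynamic image set"))
    if study["kidney_roi_pixels"] < 50:
        events.append(("technical", "unusually small kidney ROI"))
    # clinical checks (calculated values that may be unreliable)
    if not (0 <= study["relative_function_pct"] <= 100):
        events.append(("clinical", "relative function outside 0-100%"))
    return events

study = {"dose_mbq": 600, "frames_expected": 120, "frames_acquired": 118,
         "kidney_roi_pixels": 800, "relative_function_pct": 54.0}
for category, message in qc_renal_study(study):
    print(category, "-", message)  # two technical events are flagged
```
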

  12. The Use of Field Trips in Air-Photo Interpretation and Remote-Sensing Classes.

    ERIC Educational Resources Information Center

    Giardino, John Richard; Fish, Ernest Bertley

    1986-01-01

    Advocates the use of field trips for improving students' image-interpretation abilities. Presents guidelines for developing a field trip for an aerial-photo interpretation class or a remote-sensing class. Reviews methodology employed, content emphasis, and includes an exercise that was used on a trip. (ML)

  13. Landscape Interpretation with Augmented Reality and Maps to Improve Spatial Orientation Skill

    ERIC Educational Resources Information Center

    Carbonell Carrera, Carlos; Bermejo Asensio, Luis A.

    2017-01-01

    Landscape interpretation is needed for navigating and determining an orientation: with traditional cartography, interpreting 3D topographic information from 2D landform representations to get self-location requires spatial orientation skill. Augmented reality technology allows a new way to interact with 3D landscape representation and thereby…

  14. Standardizing Interpretive Training to Create a More Meaningful Visitor Experience

    ERIC Educational Resources Information Center

    Carr, Rob

    2016-01-01

    Implementing a standardized interpretive training and mentoring program across multiple departments has helped create a shared language that staff and volunteers use to collaborate on and evaluate interpretive programs and products. This has led to more efficient and effective training and measurable improvements in the quality of the visitor's…

  15. Oral Interpretation and Self-Disclosure: A Speculation.

    ERIC Educational Resources Information Center

    Buzza, Bonnie W.

    Effective oral interpretation, like effective communication, is self-revealing. Teachers and students of oral interpretation can improve analysis and performance of the literature by remaining aware of, first, their own involvement in the presentation and, second, the effect of this personal involvement on the audience. In the performance of oral…

  16. Exploring the Legacies of Filmed Patient Narratives: The Interpretation and Appropriation of Patient Films by Health Care Staff.

    PubMed

    Adams, Mary; Robert, Glenn; Maben, Jill

    2015-09-01

    We trace the legacies of filmed patient narratives that were edited and screened to encourage engagement with a participatory quality improvement project in an acute hospital setting in England. Using Gabriel's theory of the "narrative contract," we examine the initial success of the films in establishing common ground for the participatory project and the later, more varied interpretations of the films. Over time, the films were interpreted by staff as useful sources of learning by critical reflection, as dubious (invalid or unreliable) representations of patient experience, or as "closed" items available as auditable evidence of completed quality improvement work. We find these interpretations of the films to be shaped by the effect of social distance, the differential outcomes of project work, and changing organizational agendas. We consider the wider conditions of patient narrative as a form of quality improvement knowledge with immediate potency and fragile or fluid legitimacy over time. © The Author(s) 2015.

  17. Can digital pathology result in cost savings? A financial projection for digital pathology implementation at a large integrated health care organization.

    PubMed

    Ho, Jonhan; Ahlers, Stefan M; Stratman, Curtis; Aridor, Orly; Pantanowitz, Liron; Fine, Jeffrey L; Kuzmishin, John A; Montalto, Michael C; Parwani, Anil V

    2014-01-01

    Digital pathology offers potential improvements in workflow and interpretive accuracy. Although digital pathology is currently used mainly for research and education, its clinical use has been limited to niche applications such as frozen sections and remote second opinion consultations. This is mainly due to regulatory hurdles, but also to a dearth of data supporting a positive economic cost-benefit. Large scale adoption of digital pathology and the integration of digital slides into the routine anatomic/surgical pathology "slide less" clinical workflow will occur only if digital pathology offers a quantifiable benefit, which could come in the form of more efficient and/or higher quality care. As a large academic-based health care organization expecting to adopt digital pathology for primary diagnosis upon its regulatory approval, our institution estimated potential operational cost savings offered by the implementation of an enterprise-wide digital pathology system (DPS). Projected cost savings were calculated for the first 5 years following implementation of a DPS based on operational data collected from the pathology department. Projected savings were based on two factors: (1) productivity and lab consolidation savings; and (2) avoided treatment costs due to improvements in the accuracy of cancer diagnoses among nonsubspecialty pathologists. Detailed analyses of incremental treatment costs due to interpretive errors, resulting in either a false positive or false negative diagnosis, were performed for melanoma and breast cancer and extrapolated to 10 other common cancers. When phased in over 5 years, total cost savings based on anticipated improvements in pathology productivity and histology lab consolidation were estimated at $12.4 million for an institution with 219,000 annual accessions.
The main contributing factors to these savings were gains in pathologist clinical full-time equivalent capacity impacted by improved pathologist productivity and workload distribution. Expanding the current localized specialty sign-out model to an enterprise-wide shared general/subspecialist sign-out model could potentially reduce costs of incorrect treatment by $5.4 million. These calculations were based on annual over and under treatment costs for breast cancer and melanoma estimated to be approximately $26,000 and $11,000/case, respectively, and extrapolated to $21,500/case for other cancer types. The projected 5-year total cost savings for our large academic-based health care organization upon fully implementing a DPS was approximately $18 million. If the costs of digital pathology acquisition and implementation do not exceed this value, the return on investment becomes attractive to hospital administrators. Furthermore, improved patient outcome enabled by this technology strengthens the argument supporting adoption of an enterprise-wide DPS.

  18. Can Digital Pathology Result In Cost Savings? A Financial Projection For Digital Pathology Implementation At A Large Integrated Health Care Organization

    PubMed Central

    Ho, Jonhan; Ahlers, Stefan M.; Stratman, Curtis; Aridor, Orly; Pantanowitz, Liron; Fine, Jeffrey L.; Kuzmishin, John A.; Montalto, Michael C.; Parwani, Anil V.

    2014-01-01

    Background: Digital pathology offers potential improvements in workflow and interpretive accuracy. Although digital pathology is currently used mainly for research and education, its clinical use has been limited to niche applications such as frozen sections and remote second opinion consultations. This is mainly due to regulatory hurdles, but also to a dearth of data supporting a positive economic cost-benefit. Large scale adoption of digital pathology and the integration of digital slides into the routine anatomic/surgical pathology “slide less” clinical workflow will occur only if digital pathology offers a quantifiable benefit, which could come in the form of more efficient and/or higher quality care. Aim: As a large academic-based health care organization expecting to adopt digital pathology for primary diagnosis upon its regulatory approval, our institution estimated potential operational cost savings offered by the implementation of an enterprise-wide digital pathology system (DPS). Methods: Projected cost savings were calculated for the first 5 years following implementation of a DPS based on operational data collected from the pathology department. Projected savings were based on two factors: (1) productivity and lab consolidation savings; and (2) avoided treatment costs due to improvements in the accuracy of cancer diagnoses among nonsubspecialty pathologists. Detailed analyses of incremental treatment costs due to interpretive errors, resulting in either a false positive or false negative diagnosis, were performed for melanoma and breast cancer and extrapolated to 10 other common cancers. Results: When phased in over 5 years, total cost savings based on anticipated improvements in pathology productivity and histology lab consolidation were estimated at $12.4 million for an institution with 219,000 annual accessions.
The main contributing factors to these savings were gains in pathologist clinical full-time equivalent capacity impacted by improved pathologist productivity and workload distribution. Expanding the current localized specialty sign-out model to an enterprise-wide shared general/subspecialist sign-out model could potentially reduce costs of incorrect treatment by $5.4 million. These calculations were based on annual over and under treatment costs for breast cancer and melanoma estimated to be approximately $26,000 and $11,000/case, respectively, and extrapolated to $21,500/case for other cancer types. Conclusions: The projected 5-year total cost savings for our large academic-based health care organization upon fully implementing a DPS was approximately $18 million. If the costs of digital pathology acquisition and implementation do not exceed this value, the return on investment becomes attractive to hospital administrators. Furthermore, improved patient outcome enabled by this technology strengthens the argument supporting adoption of an enterprise-wide DPS. PMID:25250191

  19. Temperature Sensitivity as a Microbial Trait Using Parameters from Macromolecular Rate Theory

    PubMed Central

    Alster, Charlotte J.; Baas, Peter; Wallenstein, Matthew D.; Johnson, Nels G.; von Fischer, Joseph C.

    2016-01-01

    The activity of soil microbial extracellular enzymes is strongly controlled by temperature, yet the degree to which temperature sensitivity varies by microbe and enzyme type is unclear. Such information would allow soil microbial enzymes to be incorporated in a traits-based framework to improve prediction of ecosystem response to global change. If temperature sensitivity varies for specific soil enzymes, then determining the underlying causes of variation in temperature sensitivity of these enzymes will provide fundamental insights for predicting nutrient dynamics belowground. In this study, we characterized how both microbial taxonomic variation as well as substrate type affects temperature sensitivity. We measured β-glucosidase, leucine aminopeptidase, and phosphatase activities at six temperatures: 4, 11, 25, 35, 45, and 60°C, for seven different soil microbial isolates. To calculate temperature sensitivity, we employed two models, Arrhenius, which predicts an exponential increase in reaction rate with temperature, and Macromolecular Rate Theory (MMRT), which predicts rate to peak and then decline as temperature increases. We found MMRT provided a more accurate fit and allowed for more nuanced interpretation of temperature sensitivity in all of the enzyme × isolate combinations tested. Our results revealed that both the enzyme type and soil isolate type explain variation in parameters associated with temperature sensitivity. Because we found temperature sensitivity to be an inherent and variable property of an enzyme, we argue that it can be incorporated as a microbial functional trait, but only when using the MMRT definition of temperature sensitivity. We show that the Arrhenius metrics of temperature sensitivity are overly sensitive to test conditions, with activation energy changing depending on the temperature range it was calculated within. 
Thus, we propose the use of the MMRT definition of temperature sensitivity for accurate interpretation of temperature sensitivity of soil microbial enzymes. PMID:27909429
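
    The contrast between the two models can be made concrete. A sketch of the MMRT rate expression, in which a negative activation heat capacity makes ln(rate) curved in temperature so that the rate peaks and then declines (the thermodynamic parameters are illustrative, not fitted values from the study):

```python
import math

# Sketch of Macromolecular Rate Theory (MMRT): transition state theory
# with a temperature-dependent activation enthalpy and entropy. With a
# negative activation heat capacity (dcp), the rate peaks and then
# declines, unlike the monotonic Arrhenius form. Parameters are
# illustrative only.

R = 8.314           # gas constant, J mol-1 K-1
KB = 1.380649e-23   # Boltzmann constant, J K-1
H = 6.62607e-34     # Planck constant, J s

def ln_rate_mmrt(t_k, dh0=70e3, ds0=-50.0, dcp=-5e3, t0=298.15):
    """ln k(T); dh0, ds0 at reference temperature t0, dcp in J mol-1 K-1."""
    dh = dh0 + dcp * (t_k - t0)
    ds = ds0 + dcp * math.log(t_k / t0)
    return math.log(KB * t_k / H) - dh / (R * t_k) + ds / R

# For these parameters the rate peaks between 298 K and 333 K.
temps = [277.0, 298.0, 318.0, 333.0]
print([round(ln_rate_mmrt(t), 2) for t in temps])
```
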

  20. Determination of regional Euler pole parameters for Eastern Austria

    NASA Astrophysics Data System (ADS)

    Umnig, Elke; Weber, Robert; Schartner, Matthias; Brueckl, Ewald

    2017-04-01

    The horizontal motion of lithospheric plates can be described as a rotation around an axis through the Earth's center. The two points where this axis intersects the surface of the Earth are called Euler poles. The rotation is expressed by the Euler parameters: an angular velocity together with the latitude and longitude of the Euler pole. Euler parameters were calculated from GPS data for a study area in Eastern Austria. The observation network is located along the Mur-Mürz Valley and the Vienna Basin. This zone is part of the Vienna Transfer Fault, which is the major fault system between the Eastern Alps and the Carpathians. The project ALPAACT (seismological and geodetic monitoring of ALpine-PAnnonian ACtive Tectonics) investigated intra-plate tectonic movements within the Austrian part in order to estimate the seismic hazard. Precise site coordinate time series, established from processing 5 years of GPS observations, are available for the regional network spanning the years 2010.0 to 2015.0. Station velocities with respect to the global reference frame ITRF2008 have been computed for 23 sites. The common Euler vector was estimated on the basis of a subset of reliable site velocities, for stations directly located within the area of interest. In a further step, a geokinematic interpretation shall be carried out, which requires site motions with respect to the Eurasian Plate. To obtain this motion field, different variants are conceivable. In a simple approach, the mean ITRF2008 velocity of IGS site GRAZ can be adopted as the Eurasian rotational velocity. An improved alternative is to calculate site-specific velocity differences between the Euler rotation and the individual site velocities. In this poster presentation, the Euler parameters, the residual motion field, and first geokinematic interpretation results are presented.
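
    The conversion from an Euler vector to a site velocity is the rigid-rotation relation v = ω × r. A sketch with an illustrative pole and rotation rate (not the ALPAACT estimates):

```python
import math

# Sketch of converting an Euler vector to a surface site velocity,
# v = omega x r, on a spherical Earth. Pole position and rotation rate
# below are illustrative, not the study's estimates.

def unit_vec(lat_deg, lon_deg):
    """Unit position vector (ECEF direction) for geographic lat/lon."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def site_velocity_mm_yr(pole_lat, pole_lon, rate_deg_myr, site_lat, site_lon):
    """Horizontal speed (mm/yr) of a surface site under a rigid rotation."""
    r_earth_mm = 6371.0e6                   # mean Earth radius in mm
    w = math.radians(rate_deg_myr) / 1.0e6  # angular rate in rad/yr
    p = unit_vec(pole_lat, pole_lon)
    s = unit_vec(site_lat, site_lon)
    cross = (p[1] * s[2] - p[2] * s[1],
             p[2] * s[0] - p[0] * s[2],
             p[0] * s[1] - p[1] * s[0])
    return w * r_earth_mm * math.sqrt(sum(c * c for c in cross))

# A site 90 degrees from the pole moves at the full rotation rate:
print(round(site_velocity_mm_yr(90.0, 0.0, 0.25, 0.0, 16.0), 1))  # -> 27.8
```
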

  1. Considering relatives when assessing the evidential strength of mixed DNA profiles.

    PubMed

    Taylor, Duncan; Bright, Jo-Anne; Buckleton, John

    2014-11-01

    Sophisticated methods of DNA profile interpretation have enabled scientists to calculate weights for genotype sets proposed to explain some observed data. Using standard formulae, these weights can be incorporated into an LR calculation that considers two competing propositions. We demonstrate here how consideration of relatedness to the person of interest can be incorporated into an LR calculation, and how the same calculation can be used for familial searches of complex mixtures. We provide a general formula that can be used in semi- or fully automated methods of calculation and demonstrate its use by working through an example. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
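
    The starting point for such calculations is the basic likelihood ratio LR = P(E|Hp) / P(E|Hd). A deliberately simplified single-source, single-locus sketch (the paper's mixture and kinship formulae weight many proposed genotype sets; the allele frequencies here are illustrative):

```python
# Highly simplified sketch of a likelihood ratio for a single-source,
# single-locus match: LR = P(E|Hp) / P(E|Hd) = 1 / Pr(genotype), with
# Hardy-Weinberg genotype probabilities. Allele frequencies are
# illustrative; real mixture/kinship LRs are far more involved.

def genotype_freq(p, q=None):
    """Hardy-Weinberg probability: p^2 (homozygote) or 2pq (heterozygote)."""
    return p * p if q is None else 2.0 * p * q

def single_source_lr(p, q=None):
    """LR for a matching profile: numerator 1 under Hp, random match under Hd."""
    return 1.0 / genotype_freq(p, q)

# heterozygote with allele frequencies 0.10 and 0.05:
print(round(single_source_lr(0.10, 0.05)))  # -> 100
```
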

  2. Improving visual observation skills through the arts to aid radiographic interpretation in veterinary practice: A pilot study.

    PubMed

    Beck, Cathy; Gaunt, Heather; Chiavaroli, Neville

    2017-09-01

    Radiographic interpretation is a perceptual and cognitive skill. Recently, core veterinary radiology textbooks have focused on the cognitive (i.e., the clinical) aspects of radiographic interpretation rather than on the features of visual observation that improve identification of abnormalities. As a result, the skill of visual observation is underemphasized and thus often underdeveloped by trainees. The study of the arts in medical education has been used to train and improve visual observation and empathy. The use of the arts to improve visual observation skills in veterinary science has not been previously described. The objectives of this pilot study were to adapt the existing Visual Arts in Health Education Program for medical and dental students at the University of Melbourne, Australia to third-year Doctor of Veterinary Medicine students and to evaluate their perceptions regarding the program's effects on visual observation skills and confidence with respect to radiographic interpretation. This adaptation took the form of a single seminar. Following the seminar, students reported an improved approach to radiographic interpretation and felt they had gained skills that would assist them throughout their careers. In the year following the seminar, written reports of the students who attended were compared with reports from a matched cohort of students who did not attend. This demonstrated increased identification of abnormalities and greater description of the abnormalities identified. Findings indicated that explicit training in visual observation may be a valuable adjunct to the radiology training of Doctor of Veterinary Medicine students. © 2017 American College of Veterinary Radiology.

  3. Fusion of infrared polarization and intensity images based on improved toggle operator

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua

    2018-01-01

    Integration of infrared polarization and intensity images has become a new topic in infrared image understanding and interpretation. The abundant infrared detail and target information from the intensity image and the salient edge and shape information from the polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and substantially reduce redundancy among different scales. Firstly, the multi-scale image features of the infrared polarization and intensity images are extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced by using the spatial scale. Thirdly, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying a mean-value weighting method to the smoothed source images. Finally, the fused image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective visual inspection of the experimental results indicate that the proposed method achieves better performance in preserving detail and edge information as well as improving image contrast.
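
    The core of a toggle contrast operator can be shown on a 1-D signal: each sample is replaced by its local dilation or erosion, whichever is closer, which sharpens edges. This sketch shows only the single-scale 1-D idea, not the paper's multi-scale 2-D operator:

```python
# Sketch of a single-scale morphological toggle contrast operator on a
# 1-D signal: each sample is replaced by its local maximum (dilation)
# or local minimum (erosion), whichever is closer, sharpening edges.
# The paper's method is a multi-scale 2-D variant used for feature
# extraction; this is only the core idea.

def dilate(sig, k):
    h = k // 2
    return [max(sig[max(0, i - h):i + h + 1]) for i in range(len(sig))]

def erode(sig, k):
    h = k // 2
    return [min(sig[max(0, i - h):i + h + 1]) for i in range(len(sig))]

def toggle(sig, k=3):
    d, e = dilate(sig, k), erode(sig, k)
    # pick dilation when it is strictly closer to the sample, else erosion
    return [di if (di - s) < (s - ei) else ei for s, di, ei in zip(sig, d, e)]

blurred_edge = [0, 0, 1, 3, 7, 9, 10, 10]
print(toggle(blurred_edge))  # -> [0, 0, 0, 1, 9, 10, 10, 10] (sharper edge)
```
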

  4. Structurally Driven Enhancement of Resonant Tunneling and Nanomechanical Properties in Diamond-like Carbon Superlattices.

    PubMed

    Dwivedi, Neeraj; McIntosh, Ross; Dhand, Chetna; Kumar, Sushil; Malik, Hitendra K; Bhattacharyya, Somnath

    2015-09-23

    We report nitrogen-induced enhanced electron tunnel transport and improved nanomechanical properties in band-gap-modulated, nitrogen-doped DLC (N-DLC) quantum superlattice (QSL) structures. The electrical characteristics of such superlattice devices revealed negative differential resistance (NDR) behavior. The interpretation of these measurements is supported by 1D tight-binding calculations of disordered superlattice structures (chains), which include bond alternation in sp(3)-hybridized regions. Tandem theoretical and experimental analysis shows improved tunnel transport, which can be ascribed to nitrogen-driven structural modification of the N-DLC QSL structures, especially the increased sp(2) clustering that provides additional conduction paths throughout the network. The introduction of nitrogen also improved the nanomechanical properties, resulting in enhanced elastic recovery, hardness, and elastic modulus, which is unusual but is most likely due to the onset of cross-linking of the network. Moreover, the internal stress of the N-DLC QSL structures was reduced by the nitrogen doping. In general, the combination of enhanced electron tunnel transport and nanomechanical properties in N-DLC QSL structures/devices can open a platform for the development of a new class of cost-effective and mechanically robust advanced electronic devices for a wide range of applications.

  5. Vibrational and spectroscopic investigation on the structure of 5H-dibenzo[b,f]azepine-5-carboxamide

    NASA Astrophysics Data System (ADS)

    Muthu, S.; Renuga, S.

    2013-10-01

    Fourier transform Raman and Fourier transform infrared spectra of 5H-dibenzo[b,f]azepine-5-carboxamide were recorded in the regions 4000-100 cm-1 and 4000-400 cm-1, respectively, in the solid phase. 5H-dibenzo[b,f]azepine-5-carboxamide is typically used for the treatment of seizure disorders and neuropathic pain. The equilibrium geometry, harmonic vibrational frequencies, infrared intensities, and Raman scattering activities were calculated by the density functional B3LYP/6-31G(d,p) method. A detailed interpretation of the vibrational spectra of this compound has been made on the basis of the calculated potential energy distribution (PED). The thermodynamic functions of the title compound were also computed at the same level of theory and basis set. A detailed interpretation of the infrared and Raman spectra of 5H-dibenzo[b,f]azepine-5-carboxamide is reported. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The linear polarizability (α) and first-order hyperpolarizability (β) values of the investigated molecule have been computed using DFT quantum mechanical calculations. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The observed and calculated wavenumbers are found to be in good agreement. The experimental spectra also coincide satisfactorily with the theoretically constructed spectra.

  6. Implications of observed inconsistencies in carbonate chemistry measurements for ocean acidification studies

    NASA Astrophysics Data System (ADS)

    Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.

    2012-07-01

    The growing field of ocean acidification research is concerned with the investigation of organisms' responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.
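    The calculation of pCO2 from CT and pH can be sketched as follows. The equilibrium constants below are illustrative round numbers for surface seawater at roughly 20 °C, not values from this study; a real calculation must use constants appropriate to the temperature, salinity, and pH scale of the measurements:

    ```python
    # Illustrative equilibrium constants (assumed values, not from the study):
    K0 = 3.24e-2   # CO2 solubility, mol kg-1 atm-1
    K1 = 1.4e-6    # first dissociation constant of carbonic acid
    K2 = 1.2e-9    # second dissociation constant

    def pco2_from_ct_ph(ct_mol_kg, ph):
        """pCO2 (uatm) from total dissolved inorganic carbon CT and pH."""
        h = 10.0 ** (-ph)
        # CT = [CO2*] * (1 + K1/h + K1*K2/h**2), so solve for [CO2*]
        co2_star = ct_mol_kg / (1.0 + K1 / h + K1 * K2 / h**2)
        return co2_star / K0 * 1e6  # atm -> uatm

    print(round(pco2_from_ct_ph(2.0e-3, 8.1)))  # a few hundred uatm for typical surface seawater
    ```

    Computing pCO2 from AT and CT instead requires an iterative solve of the full carbonate system, which is where the choice of input pair introduces the discrepancies the study describes.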

  8. Mobile Learning Based Worked Example in Electric Circuit (WEIEC) Application to Improve the High School Students' Electric Circuits Interpretation Ability

    ERIC Educational Resources Information Center

    Yadiannur, Mitra; Supahar

    2017-01-01

    This research aims to determine the feasibility and effectiveness of the mobile learning-based Worked Example in Electric Circuits (WEIEC) application in improving high school students' electric circuits interpretation ability on Direct Current Circuits materials. The research method used was a combination of the Four-D Model and the ADDIE model. The…

  9. Improving ECG Competence in Medical Trainees in a UK District General Hospital

    PubMed Central

    McAloon, Christopher; Leach, Helen; Gill, Simrat; Aluwalia, Arun; Trevelyan, Jasper

    2014-01-01

    Background Competency in electrocardiogram (ECG) interpretation is central to undergraduate and postgraduate clinical training. Studies have demonstrated that ECGs are interpreted sub-optimally. Our study compares the effectiveness of two learning strategies to improve competence and confidence. Method A 1-month prospective randomized study compared the strategies in two cohorts: undergraduate third-year medical students and postgraduate foundation year one (FY1) doctors. Both had blinded randomization to one of these learning strategies: a focused teaching program (FTP) or self-directed learning (SDL). All volunteers completed a confidence questionnaire before and after the allocated learning strategy and an ECG recognition multiple choice question (MCQ) paper at the end of the learning period. Results The FTP group of undergraduates demonstrated significantly greater success in interpreting “ventricular tachycardia” (P = 0.046) and “narrow complex tachycardia” (P = 0.009) than the SDL group. Participant confidence increased with both learning strategies. FTP confidence demonstrated a greater improvement than SDL for both cohorts. Conclusion A dedicated teaching program can improve trainee confidence and competence in ECG interpretation. A larger benefit is observed in undergraduates and those undertaking an FTP. PMID:28392875

  10. A decision support system and rule-based algorithm to augment the human interpretation of the 12-lead electrocardiogram.

    PubMed

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J

    The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity in ECG interpretation often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process. This computerised decision support system has been named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies where diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed where subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value = 0.1852), and the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although results were not found to be statistically significant, we found that: 1) our decision support tool increased the number of correct interpretations, 2) the DDA algorithm suggested the correct interpretation more often than humans, and 3) as many as 7 computerised diagnostic suggestions augmented human decision making in ECG interpretation. Statistical significance may be achieved by expanding the sample size. Copyright © 2017 Elsevier Inc. All rights reserved.
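    A rule-based lookup of the kind described (diagnostic criteria stored as JSON, queried against the interpreter's annotations) might be sketched as follows. The criteria and annotation keys below are simplified placeholders, not the actual DDA rule set:

    ```python
    import json

    # Simplified placeholder criteria, not the real DDA rules.
    CRITERIA = json.loads("""
    {
      "Sinus bradycardia": {"rhythm": "sinus", "rate_max": 60},
      "Sinus tachycardia": {"rhythm": "sinus", "rate_min": 100},
      "Atrial fibrillation": {"rhythm": "irregularly irregular"}
    }
    """)

    def suggest(annotations):
        """Return every diagnosis whose criteria are all satisfied by the annotations."""
        suggestions = []
        for dx, rules in CRITERIA.items():
            ok = True
            for key, val in rules.items():
                if key.endswith("_max"):      # numeric upper bound, e.g. rate_max
                    ok &= annotations.get(key[:-4], float("inf")) <= val
                elif key.endswith("_min"):    # numeric lower bound, e.g. rate_min
                    ok &= annotations.get(key[:-4], float("-inf")) >= val
                else:                         # exact categorical match
                    ok &= annotations.get(key) == val
            if ok:
                suggestions.append(dx)
        return suggestions

    print(suggest({"rhythm": "sinus", "rate": 45}))  # ['Sinus bradycardia']
    ```

    Keeping the criteria in a declarative JSON document, as the abstract describes, lets clinicians review and extend the rule set without touching the matching engine.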

  11. On the interpretation of satellite-derived gravity and magnetic data for studies of crustal geology and metallogenesis

    NASA Technical Reports Server (NTRS)

    Hastings, D. A.

    1985-01-01

    Satellite-derived global gravity and magnetic maps have been shown to be useful in large-scale studies of the Earth's crust, despite the relative infancy of such studies. Numerous authors have made spatial associations of gravity or magnetic anomalies with geological provinces. Gravimetric interpretations are often made in terms of isostasy, regional variations of density, or of geodesy in general. Interpretations of satellite magnetic anomalies often base assumptions of overall crustal magnetism on concepts of the vertical and horizontal distribution of magnetic susceptibility, then make models of these assumed distributions. The improved satellite gravity and magnetic data expected from the proposed Geopotential Research Mission should considerably enhance the scientific community's ability to analyze and interpret global magnetic and gravity data.

  12. Understanding the Effects of Cognitive Dissonance during Interpretation: Implications for "Hands-On" Programming.

    ERIC Educational Resources Information Center

    Morgan, Mark

    1996-01-01

    Describes a field experiment that was designed to test the effects of three different interpretive programs on students' attitudes toward live, nonpoisonous snakes. One of the treatments measured the effectiveness of using "hands-on" interpretive techniques. Direct contact with snakes improved students' attitudes but only slightly. Females'…

  13. Schlumberger soundings near Medicine Lake, California

    USGS Publications Warehouse

    Zohdy, A.A.R.; Bisdorf, R.J.

    1990-01-01

    The use of direct current resistivity soundings to explore the geothermal potential of the Medicine Lake area in northern California proved to be challenging because of high contact resistances and winding roads. Deep Schlumberger soundings were made by expanding current electrode spacings along the winding roads. Corrected sounding data were interpreted using an automatic interpretation method. Forty-two maps of interpreted resistivity were calculated for depths extending from 20 to 1000 m. Computer animation of these 42 maps revealed that: 1) certain subtle anomalies migrate laterally with depth and can be traced to their origin, 2) an extensive volume of low-resistivity material underlies the survey area, and 3) the three areas (east of Bullseye Lake, southwest of Glass Mountain, and northwest of Medicine Lake) may be favorable geothermal targets. Six interpreted resistivity maps and three cross-sections illustrate the above findings. -from Authors

  14. [The interpreter in an intercultural clinical milieu].

    PubMed

    Vissandjée, B; Ntetu, A L; Courville, F; Breton, E R; Bourdeau, M

    1998-05-01

    The public's diversified language profile means that nursing practice must adjust to provide the same quality of care to all clients, no matter what language they speak. To improve quality and quantity of information exchanged in the nurse-client-interpreter triangle, the authors have investigated the type of information likely to be filtered and studied the various factors underlying the interpreter's choice to filter information. The authors also analyzed the values interpreters assign to information and the factors that form the background for filtering, including mistrust. The authors suggest adequately preparing interpreters; using interpreters' expertise; and developing an appropriate training program for intercultural interpreters to enable them to better function within health care institutions.

  15. [Smartphone application for blood gas interpretation].

    PubMed

    Obiols, Julien; Bardo, Pascale; Garnier, Jean-Pierre; Brouard, Benoît

    2013-01-01

    Ninety-four percent of health professionals use their smartphones for business purposes, and more than 50% have medical applications. The «Blood Gas» application was created as part of this dynamic and to participate in the development of e-health in France. The «Blood Gas» application facilitates interpretation of the results of blood gas analysis using an algorithm developed with reference to the medical literature. It can detect complex or mixed acid-base disorders by evaluating the effectiveness of the secondary response. The application also assesses the respiratory status of the patient by calculating the PaO2/FiO2 ratio and the alveolar-arterial gradient, and it indicates the presence of a shunt effect. Finally, a specific module calculates the SID (strong ion difference) according to the Stewart model to detect complex acid-base disorders.
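    The respiratory calculations mentioned (PaO2/FiO2 ratio and alveolar-arterial gradient) follow standard textbook formulas. The sketch below uses the common simplified alveolar gas equation with assumed sea-level constants; it is illustrative and not a reproduction of the application's algorithm:

    ```python
    def pf_ratio(pao2_mmHg, fio2):
        """PaO2/FiO2 ratio; FiO2 as a fraction (room air = 0.21)."""
        return pao2_mmHg / fio2

    def aa_gradient(pao2_mmHg, paco2_mmHg, fio2, patm=760.0, ph2o=47.0, rq=0.8):
        """Alveolar-arterial O2 gradient via the simplified alveolar gas equation.

        patm: barometric pressure, ph2o: water vapour pressure at 37 degC,
        rq: respiratory quotient (all assumed sea-level defaults).
        """
        pAo2 = fio2 * (patm - ph2o) - paco2_mmHg / rq  # alveolar PO2
        return pAo2 - pao2_mmHg

    # Room-air example: PaO2 90 mmHg, PaCO2 40 mmHg
    print(pf_ratio(90, 0.21))         # ~429 (normal lung)
    print(aa_gradient(90, 40, 0.21))  # ~10 mmHg
    ```

    A markedly elevated A-a gradient at a given FiO2 is what such an algorithm would flag as suggestive of a shunt or V/Q mismatch.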

  16. Identified particle v2 and v4 in Au+Au collisions at √s_NN =62, 130 and 200 GeV

    NASA Astrophysics Data System (ADS)

    Bai, Yuting

    2004-10-01

    The measured large elliptic flow v2 is interpreted as an indication of early local equilibrium [1,2] and is relevant to interpretations involving a strongly interacting quark-gluon plasma phase. v4 is argued to be more sensitive than v2 to initial conditions in hydrodynamic calculations [3]. We will present identified particle v2 and v4 measurements at √s_NN = 62, 130 and 200 GeV. The comparisons to hydro calculations will be shown, and the energy dependence of v2 as a function of transverse momentum will be addressed and discussed. [1] H. Sorge, Phys. Rev. Lett. 78, 2309 (1997). [2] P. F. Kolb and U. Heinz, nucl-th/0305084. [3] P. F. Kolb, Phys. Rev. C 68, 031902 (2003).

  17. A primer on marginal effects-part II: health services research applications.

    PubMed

    Onukwugha, E; Bergtold, J; Jain, R

    2015-02-01

    Marginal analysis evaluates changes in a regression function associated with a unit change in a relevant variable. The primary statistic of marginal analysis is the marginal effect (ME). The ME facilitates the examination of outcomes for defined patient profiles or individuals while measuring the change in original units (e.g., costs, probabilities). The ME has a long history in economics; however, it is not widely used in health services research despite its flexibility and ability to provide unique insights. This article, the second in a two-part series, discusses practical issues that arise in the estimation and interpretation of the ME for a variety of regression models often used in health services research. Part one provided an overview of prior studies discussing ME followed by derivation of ME formulas for various regression models relevant for health services research studies examining costs and utilization. The current article illustrates the calculation and interpretation of ME in practice and discusses practical issues that arise during the implementation, including: understanding differences between software packages in terms of functionality available for calculating the ME and its confidence interval, interpretation of the average marginal effect versus the marginal effect at the mean, and the difference between ME and relative effects (e.g., odds ratio). Programming code to calculate ME using SAS, STATA, LIMDEP, and MATLAB is also provided. The illustration, discussion, and application of ME in this two-part series support the conduct of future studies applying the concept of marginal analysis.
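    For a logistic regression, the distinction the abstract draws between the average marginal effect (AME) and the marginal effect at the mean (MEM) can be illustrated with assumed coefficients (the values and covariate distribution below are made up for the example; the article itself provides code in SAS, STATA, LIMDEP, and MATLAB):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(2.0, 1.0, 10_000)   # a continuous covariate (assumed distribution)
    b0, b1 = -1.0, 0.5                  # assumed fitted coefficients

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # For P(y=1|x) = sigmoid(b0 + b1*x), the ME of x is dP/dx = b1 * p * (1 - p)
    p = sigmoid(b0 + b1 * x)
    ame = np.mean(b1 * p * (1 - p))          # average of per-observation MEs
    p_bar = sigmoid(b0 + b1 * x.mean())
    mem = b1 * p_bar * (1 - p_bar)           # ME evaluated at the mean covariate

    print(f"AME = {ame:.4f}, MEM = {mem:.4f}")  # the two generally differ
    ```

    Because the ME of a nonlinear model varies across observations, averaging the per-observation effects (AME) and evaluating at the sample mean (MEM) answer different questions, which is exactly the interpretation issue the article discusses.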

  18. The Standard Deviation of Differential Index as an innovative diagnostic tool based on kinematic parameters for objective assessment of an upper limb motion pathology.

    PubMed

    Jurkojć, Jacek; Wodarski, Piotr; Michnik, Robert A; Bieniek, Andrzej; Gzik, Marek; Granek, Arkadiusz

    2017-01-01

    Indexing methods are very popular for determining the degree of disability associated with motor dysfunctions. Currently, indexing methods dedicated to the upper limbs are not very popular, probably due to difficulties in their interpretation. This work presents the calculation algorithm of the new SDDI index and attempts to determine the level of physical dysfunction, along with a description of its kind, based on the interpretation of the calculated SDDI and PULMI indices. 23 healthy people (10 women and 13 men), who constituted a reference group, and a group of 3 people with mobility impairments participated in the tests. To examine the possible uses of the SDDI index, the participants repetitively performed two selected rehabilitation movements of the upper extremities. During the tests, kinematic values were recorded using the inertial motion analysis system MVN BIOMECH. The results were collected as waveforms of 9 anatomical angles in 4 joints of the upper extremities. Then, SDDI and PULMI indices were calculated for each person with mobility impairments. Next, an analysis was performed to check which abnormalities in upper extremity motion can influence the value of both indices, and the interpretation of those indices is presented. Joint analysis of the two indices provides information on whether the patient has correctly performed the set movement sequence and enables the determination of possible irregularities in the performance of the given movement.

  19. Can Natural Language Processing Improve the Efficiency of Vaccine Adverse Event Report Review?

    PubMed

    Baer, B; Nguyen, M; Woo, E J; Winiecki, S; Scott, J; Martin, D; Botsis, T; Ball, R

    2016-01-01

    Individual case review of spontaneous adverse event (AE) reports remains a cornerstone of medical product safety surveillance for industry and regulators. Previously we developed the Vaccine Adverse Event Text Miner (VaeTM) to offer automated information extraction and potentially accelerate the evaluation of large volumes of unstructured data and facilitate signal detection. The objective was to assess how the information extraction performed by VaeTM impacts the accuracy of a medical expert's review of the vaccine adverse event report. The "outcome of interest" (diagnosis, cause of death, second level diagnosis), "onset time," and "alternative explanations" (drug, medical and family history) for the adverse event were extracted from 1000 reports from the Vaccine Adverse Event Reporting System (VAERS) using the VaeTM system. We compared the human interpretation, by medical experts, of the VaeTM-extracted data with their interpretation of the traditional full text reports for these three variables. Two experienced clinicians alternately reviewed text miner output and full text. A third clinician scored the match rate using a predefined algorithm; the proportion of matches and 95% confidence intervals (CI) were calculated. Review time per report was also analyzed. The proportion of matches between the interpretation of the VaeTM-extracted data and the interpretation of the full text was 93% for outcome of interest (95% CI: 91-94%) and 78% for alternative explanation (95% CI: 75-81%). Extracted data on the time to onset was used in 14% of cases and was a match in 54% (95% CI: 46-63%) of those cases. When supported by structured time data from reports, the match for time to onset was 79% (95% CI: 76-81%). The extracted text averaged 136 (74%) fewer words, resulting in a mean reduction in review time of 50 (58%) seconds per report. 
Despite a 74% reduction in words, the clinical conclusion from VaeTM-extracted data agreed with the full text in 93% and 78% of reports for the outcome of interest and alternative explanation, respectively. The limited amount of extracted time interval data indicates the need for further development of this feature. VaeTM may improve review efficiency, but further study is needed to determine if this level of agreement is sufficient for routine use.
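    A 95% confidence interval for a match proportion like those quoted above can be computed with the standard normal approximation; a minimal sketch with illustrative counts (the 930/1000 figures below are assumed for the example, and the study may have used a different interval method):

    ```python
    import math

    def prop_ci(matches, n, z=1.96):
        """Point estimate and normal-approximation 95% CI for a proportion."""
        p = matches / n
        half = z * math.sqrt(p * (1 - p) / n)   # standard error times z
        return p, max(0.0, p - half), min(1.0, p + half)

    # e.g. 930 matches out of 1000 reports (hypothetical counts)
    p, lo, hi = prop_ci(930, 1000)
    print(f"{p:.0%} (95% CI: {lo:.1%}-{hi:.1%})")
    ```

    For proportions near 0 or 1, a Wilson or exact interval is usually preferred over this simple approximation.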

  20. Access to hospital interpreter services for limited English proficient patients in New Jersey: a statewide evaluation.

    PubMed

    Flores, Glenn; Torres, Sylvia; Holmes, Linda Janet; Salas-Lopez, Debbie; Youdelman, Mara K; Tomany-Korman, Sandra C

    2008-05-01

    We surveyed New Jersey (NJ) hospitals to assess current language services and identify policy recommendations on meeting limited English proficiency (LEP) patients' needs. Survey with 37 questions regarding hospital/patient features, interpreter services, and resources/policies needed to provide quality interpreter services. Sixty-seven hospitals responded (55% response rate). Most NJ hospitals have no interpreter services department, 80% provide no staff training on working with interpreters, 31% lack multilingual signs, and 19% offer no written translation services. Only 3% of hospitals have full-time interpreters, a ratio of 1 interpreter:240,748 LEP NJ residents. Most hospitals stated third-party reimbursement for interpreters would be beneficial, by reducing costs, adding interpreters, meeting population growth, and improving communication. Most NJ hospitals have no full-time interpreters, interpreter services department, or staff training on working with interpreters, and deficiencies exist in hospital signage and translation services. Most NJ hospitals stated third-party reimbursement for interpreter services would be beneficial.

  1. Electronic states of Zn2 - Ab initio calculations of a prototype for Hg2

    NASA Technical Reports Server (NTRS)

    Hay, P. J.; Dunning, T. H., Jr.; Raffenetti, R. C.

    1976-01-01

    The electronic states of Zn2 are investigated by ab initio polarization configuration-interaction calculations. Molecular states dissociating to Zn(1S) + Zn(1S, 3P, 1P) and Zn(3P) + Zn(3P) are treated. Important effects from states arising from Zn(+)(2S) + Zn(-)(2P) are found in the potential-energy curves and electronic-transition moments. A model calculation for Hg2 based on the Zn2 curves and including spin-orbit coupling leads to a new interpretation of the emission bands in Hg vapor.

  2. Polarimetry with multiple mirror telescopes

    NASA Technical Reports Server (NTRS)

    West, S. C.

    1986-01-01

    The polarizations of multiple mirror telescopes are calculated using Mueller calculus. It is found that the Multiple Mirror Telescope (MMT) produces a constant depolarization that is a function of wavelength and independent of sky position. The efficiency and crosstalk are modeled and experimentally verified. The two- and four-mirror new generation telescopes are found to produce sinusoidal depolarization for which an accurate interpretation of the incident Stokes vector requires inverse matrix calculations. Finally, the depolarization of f/1 paraboloids is calculated and found to be less than 0.1 percent at 3000 A.
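    The inverse-matrix recovery of the incident Stokes vector mentioned above can be sketched with a generic mirror Mueller matrix combining diattenuation and retardance; the reflectances and phase below are assumed illustrative values, not measured MMT parameters:

    ```python
    import numpy as np

    def mirror_mueller(rs, rp, delta):
        """Mueller matrix of a mirror with s/p reflectances rs, rp and retardance delta."""
        a, b = 0.5 * (rs + rp), 0.5 * (rs - rp)
        c = np.sqrt(rs * rp)
        return np.array([
            [a,   b,   0.0,                 0.0                ],
            [b,   a,   0.0,                 0.0                ],
            [0.0, 0.0,  c * np.cos(delta),  c * np.sin(delta)  ],
            [0.0, 0.0, -c * np.sin(delta),  c * np.cos(delta)  ],
        ])

    M = mirror_mueller(rs=0.95, rp=0.90, delta=np.radians(15))  # assumed values
    s_in = np.array([1.0, 0.3, 0.1, 0.0])   # incident Stokes vector (I, Q, U, V)
    s_out = M @ s_in                         # what the instrument measures
    recovered = np.linalg.solve(M, s_out)    # inverse-matrix recovery of s_in
    print(np.allclose(recovered, s_in))      # True
    ```

    In a real telescope the system matrix is a product of one Mueller matrix per mirror (some rotated with pointing), which is why an accurate interpretation of the incident Stokes vector requires inverting the full chain, as the abstract notes.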

  3. The excitation of electronic transverse energy levels in an intense magnetic field

    NASA Technical Reports Server (NTRS)

    Bussard, R. W.

    1978-01-01

    Observations of the X-ray pulsar Hercules X-1 show a line emission feature at about 60 keV, which has been interpreted as the fundamental electron cyclotron line in a magnetic field of around six trillion gauss. In this interpretation, the line radiation results from transitions between transverse energy levels, which are quantized by the field. The expected line luminosity from the excitation of these levels by protons falling into the polar cap of a neutron star is calculated. The protons are assumed to attain kinetic energies up to around 200 MeV, the gravitational potential energy at the surface. The cross sections for high energy Coulomb encounters between small pitch angle protons and electrons in a strong field are calculated and used to obtain the energy loss rate of the infalling protons. This rate, together with the rate of elastic nuclear proton collisions, is then used to calculate the number of line photons an infalling proton can be expected to produce, directly or indirectly. The results are applied to Hercules X-1.

  4. An Evaluation of the Texas Functional Living Scale's Latent Structure and Subscales.

    PubMed

    González, David Andrés; Soble, Jason R; Marceaux, Janice C; McCoy, Karin J M

    2017-02-01

    Performance-based functional assessment is a critical component of neuropsychological practice. The Texas Functional Living Scale (TFLS) has promise given its brevity, nationally representative norms, and co-norming with Wechsler scales. However, its subscale structure has not been evaluated. The purpose of this study was to evaluate the TFLS in a mixed clinical sample (n = 197). Reliability, convergent, and discriminant validity coefficients were calculated against neurocognitive testing and collateral reports, and factor analysis was performed. The Money and Calculation subscale had the best psychometric properties of the subscales. The evidence did not support solitary interpretation of the Time subscale. A three-factor latent structure emerged representing memory and semantic retrieval, performance and visual scanning, and financial calculation. This study added psychometric support for interpretation of the TFLS total score and some of its subscales. Study limitations included sample characteristics (e.g., gender ratio) and low power for collateral report analyses. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  5. Influence of Fluorescein Angiography on the Diagnosis and Management of Retinopathy of Prematurity

    PubMed Central

    Klufas, Michael A.; Patel, Samir N.; Ryan, Michael C.; Gupta, Mrinali Patel; Jonas, Karyn E.; Ostmo, Susan; Martinez-Castellanos, Maria Ana; Berrocal, Audina M.; Chiang, Michael F.; Chan, R.V. Paul

    2016-01-01

    Purpose To examine the influence of fluorescein angiography (FA) on the diagnosis and management of retinopathy of prematurity (ROP). Design Prospective cohort study. Participants Nine recognized ROP experts (3 pediatric ophthalmologists; 6 retina specialists) interpreted 32 sets (16 color fundus photographs; 16 color fundus photographs paired with the corresponding FAs) of wide-angle retinal images from infants with ROP. Methods All experts independently reviewed the 32 image sets on a secure web site and provided a diagnosis and management plan for the case presented, first based on color fundus photographs alone, and then by color fundus photographs and corresponding FA. Main Outcome Measures Sensitivity and specificity of the ROP diagnosis (zone, stage, plus disease, and category – i.e. no ROP, mild ROP, type-2 ROP, and treatment-requiring ROP) was calculated using a consensus reference standard diagnosis, determined from the diagnosis of the color fundus photographs by three experienced readers in combination with the clinical diagnosis based on ophthalmoscopic examination. The kappa statistic was used to analyze the average intergrader agreement among experts for the diagnosis of zone, stage, plus disease, and category. Results Addition of FA to color fundus photographs resulted in a significant improvement in sensitivity for diagnosis of stage 3 or worse disease (39.8% vs. 74.1%, P = 0.008), type-2 or worse ROP (69.4% vs. 86.8%, P = 0.013), and pre-plus or worse disease (50.5% vs. 62.6%, P = 0.031). There was a nonsignificant trend towards improved sensitivity for diagnosis of treatment-requiring ROP (22.2% vs. 40.3%, P = 0.063). Using the kappa statistic, addition of FA to color fundus photographs significantly improved intergrader agreement for diagnosis of treatment-requiring ROP. Addition of FA to color fundus photographs did not significantly affect intergrader agreement for the diagnosis of stage, zone, or plus disease. 
Conclusions Compared to color fundus photographs alone, fluorescein angiography may improve the sensitivity of diagnosis of ROP by experts, particularly for stage 3 disease. In addition, intergrader agreement for diagnosis of treatment-requiring ROP may improve with FA interpretation. PMID:26028345
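    The kappa statistic used above to quantify intergrader agreement corrects the observed agreement for agreement expected by chance. A minimal sketch of Cohen's kappa for two graders, with a hypothetical 2x2 contingency table (counts made up for illustration):

    ```python
    import numpy as np

    # Hypothetical counts: rows = grader A, cols = grader B
    # (treatment-requiring vs. not; values invented for the example)
    table = np.array([[20.0, 5.0],
                      [4.0, 71.0]])

    n = table.sum()
    po = np.trace(table) / n                   # observed agreement
    pe = (table.sum(1) @ table.sum(0)) / n**2  # chance agreement from marginals
    kappa = (po - pe) / (1 - pe)
    print(round(kappa, 3))
    ```

    For more than two graders, as in this study, a multi-rater generalization such as Fleiss' kappa is typically averaged across cases in the same spirit.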

  6. The prevention of mother-to-child transmission of HIV cascade analysis tool: supporting health managers to improve facility-level service delivery.

    PubMed

    Gimbel, Sarah; Voss, Joachim; Mercer, Mary Anne; Zierler, Brenda; Gloyd, Stephen; Coutinho, Maria de Joana; Floriano, Florencia; Cuembelo, Maria de Fatima; Einberg, Jennifer; Sherr, Kenneth

    2014-10-21

    The objective of the prevention of Mother-to-Child Transmission (pMTCT) cascade analysis tool is to provide frontline health managers at the facility level with the means to rapidly, independently and quantitatively track patient flows through the pMTCT cascade, and readily identify priority areas for clinic-level improvement interventions. Over a period of six months, five experienced maternal-child health managers and researchers iteratively adapted and tested this systems analysis tool for pMTCT services. They prioritized components of the pMTCT cascade for inclusion, disseminated multiple versions to 27 health managers and piloted it in five facilities. Process mapping techniques were used to chart pMTCT cascade steps in these five facilities, to document antenatal care attendance, HIV testing and counseling, provision of prophylactic anti-retrovirals, safe delivery, safe infant feeding, infant follow-up including HIV testing, and family planning, in order to obtain site-specific knowledge of service delivery. Seven pMTCT cascade steps were included in the Excel-based final tool. Prevalence calculations were incorporated as sub-headings under relevant steps. Cells not requiring data inputs were locked, wording was simplified, and stepwise drop-off and maximization functions were included at key steps along the cascade. While the drop-off function allows health workers to rapidly assess how many patients were lost at each step, the maximization function details the additional people served if only one step improves to 100% capacity while others stay constant. Our experience suggests that adaptation of a cascade analysis tool for facility-level pMTCT services is feasible and appropriate as a starting point for discussions of where to implement improvement strategies. The resulting tool facilitates the engagement of frontline health workers and managers who fill out, interpret, and apply the tool, and then follow up with quality improvement activities. 
Research on adoption, interpretation, and sustainability of this pMTCT cascade analysis tool by frontline health managers is needed. ClinicalTrials.gov NCT02023658, December 9, 2013.
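    The drop-off and maximization functions described above can be expressed as simple spreadsheet-style calculations. The step names and counts below are hypothetical, and the step list is shortened relative to the seven-step tool:

    ```python
    # Hypothetical cascade counts: patients reaching each consecutive step
    steps = [
        ("ANC attendance", 1000),
        ("HIV testing", 850),
        ("ARV prophylaxis", 600),
        ("Safe delivery", 480),
        ("Infant HIV test", 300),
    ]

    # Drop-off: patients lost between consecutive steps
    for (prev, a), (cur, b) in zip(steps, steps[1:]):
        print(f"{prev} -> {cur}: lost {a - b}")

    def maximize(steps, i):
        """Patients served at the final step if step i alone rose to 100%
        of the preceding step, with all later conversion rates unchanged."""
        counts = [n for _, n in steps]
        new = counts[i - 1]                   # step i raised to its ceiling
        for j in range(i + 1, len(counts)):
            new *= counts[j] / counts[j - 1]  # keep downstream rates constant
        return new

    extra = maximize(steps, 2) - steps[-1][1]  # improve the ARV prophylaxis step
    print(f"additional infants tested: {extra:.0f}")
    ```

    Comparing the `maximize` result across steps shows managers which single improvement would yield the largest downstream gain, which is the discussion the tool is meant to seed.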

  7. An active, collaborative approach to learning skills in flow cytometry.

    PubMed

    Fuller, Kathryn; Linden, Matthew D; Lee-Pullen, Tracey; Fragall, Clayton; Erber, Wendy N; Röhrig, Kimberley J

    2016-06-01

    Advances in science education research have the potential to improve the way students learn to perform scientific interpretations and understand science concepts. We developed active, collaborative activities to teach skills in manipulating flow cytometry data using FlowJo software. Undergraduate students were given compensated clinical flow cytometry listmode output (FCS) files and asked to design a gating strategy to diagnose patients with different hematological malignancies on the basis of their immunophenotype. A separate cohort of research trainees was given uncompensated data files on which they performed their own compensation, calculated the antibody staining index, designed a sequential gating strategy, and quantified rare immune cell subsets. Student engagement, confidence, and perceptions of flow cytometry were assessed using a survey. Competency against the learning outcomes was assessed by asking students to undertake tasks that required understanding of flow cytometry dot plot data and gating sequences. The active, collaborative approach allowed students to achieve learning outcomes not previously possible with traditional teaching formats, for example, having students design their own gating strategy, without forgoing essential outcomes such as the interpretation of dot plots. In undergraduate students, favorable perceptions of flow cytometry as a field and as a potential career choice were correlated with student confidence but not the ability to perform flow cytometry data analysis. We demonstrate that this new pedagogical approach to teaching flow cytometry is beneficial for student understanding and interpretation of complex concepts. It should be considered as a useful new method for incorporating complex data analysis tasks such as flow cytometry into curricula. Copyright © 2016 The American Physiological Society.

  8. A fast fully constrained geometric unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Li, Xiao-run; Cui, Jian-tao; Zhao, Liao-ying; Zheng, Jun-peng

    2014-11-01

    A great challenge in hyperspectral image analysis is decomposing a mixed pixel into a collection of endmembers and their corresponding abundance fractions. This paper presents an improved implementation of the barycentric coordinate approach to unmixing hyperspectral images, integrated with the most-negative-remove projection method to meet the abundance sum-to-one constraint (ASC) and the abundance non-negativity constraint (ANC). The original barycentric coordinate approach interprets the unmixing problem as a simplex volume ratio problem, solved by calculating the determinants of two augmented matrices: one consists of all the endmembers, and the other consists of the to-be-unmixed pixel together with all the endmembers except the one corresponding to the abundance being estimated. In this paper, we first modify the barycentric coordinate algorithm by applying the matrix determinant lemma to simplify the unmixing process, so that the calculation involves only linear matrix and vector operations; the per-pixel determinant calculation required by the original algorithm is thus avoided. At the end of this step, the estimated abundances satisfy the ASC. The most-negative-remove projection method is then used to make the abundance fractions meet the full constraints. The algorithm is demonstrated on both synthetic and real images. It yields abundance maps similar to those obtained by FCLS, while outperforming it in runtime owing to its computational simplicity.
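
    The ASC-constrained barycentric step can be sketched in a few lines. By Cramer's rule, solving the augmented linear system below yields exactly the determinant (simplex volume) ratios described in the abstract; this is an illustrative sketch rather than the paper's implementation, and the most-negative-remove projection for the ANC is omitted:

```python
import numpy as np

def barycentric_abundances(endmembers, pixel):
    """Abundances of `pixel` as barycentric coordinates of the endmember
    simplex.  Appending a row of ones enforces the sum-to-one (ASC)
    constraint; by Cramer's rule each coordinate equals a ratio of
    simplex volumes (determinants), as in the barycentric formulation.
    endmembers: (bands, p) matrix with endmembers as columns."""
    bands, p = endmembers.shape
    A = np.vstack([endmembers, np.ones((1, p))])
    b = np.append(pixel, 1.0)
    # least-squares solve also handles the overdetermined case bands + 1 > p
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

E = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # 2 bands, 3 endmembers (columns)
x = 0.2 * E[:, 0] + 0.5 * E[:, 1] + 0.3 * E[:, 2]
a = barycentric_abundances(E, x)         # recovers the mixing fractions
```

On this noiseless pixel the recovered abundances sum to one and match the true mixing fractions; negative values would then be handled by the projection step.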

  9. Techniques for interpretation of geoid anomalies

    NASA Technical Reports Server (NTRS)

    Chapman, M. E.

    1979-01-01

    For purposes of geological interpretation, techniques are developed to compute directly the geoid anomaly over models of density within the earth. Ideal bodies such as line segments, vertical sheets, and rectangles are first used to calculate the geoid anomaly. Realistic bodies are modeled with formulas for two-dimensional polygons and three-dimensional polyhedra. By using Fourier transform methods the two-dimensional geoid is seen to be a filtered version of the gravity field, in which the long-wavelength components are magnified and the short-wavelength components diminished.
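
    The filtering relationship can be illustrated with a toy one-dimensional spectral calculation. The flat-earth relation N(k) = Δg(k) / (γ|k|) used here is an assumption consistent with the abstract's description, not Chapman's exact formulas:

```python
import numpy as np

def geoid_from_gravity(delta_g, dx, gamma=9.81):
    """Convert a gravity-anomaly profile (m/s^2) to a geoid-height profile
    (m) by dividing each Fourier component by gamma*|k|: long-wavelength
    components are magnified, short-wavelength ones diminished."""
    n = delta_g.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    G = np.fft.fft(delta_g)
    N = np.zeros_like(G)
    nz = k != 0
    N[nz] = G[nz] / (gamma * np.abs(k[nz]))   # the k = 0 (mean) term is dropped
    return np.fft.ifft(N).real

x = np.arange(128)
k0 = 2 * np.pi * 8 / 128                      # an exact FFT bin
geoid = geoid_from_gravity(np.cos(k0 * x), dx=1.0)
```

For a single-wavenumber gravity anomaly the output is a cosine of amplitude 1/(γ k0), showing directly how the 1/|k| filter favors long wavelengths.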

  10. PDC-bit performance under simulated borehole conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, E.E.; Azar, J.J.

    1993-09-01

    Laboratory drilling tests were used to investigate the effects of pressure on polycrystalline-diamond-compact (PDC) drill-bit performance. Catoosa shale core samples were drilled with PDC and roller-cone bits at up to 1,750-psi confining pressure. All tests were conducted in a controlled environment with a full-scale laboratory drilling system. Test results indicate that, under similar operating conditions, increases in confining pressure reduce PDC-bit performance as much as or more than conventional-rock-bit performance. Specific energy calculations indicate that a combination of rock strength, chip hold-down, and bit balling may have reduced performance. Quantifying the degree to which pressure reduces PDC-bit performance will help researchers interpret test results and improve bit designs and will help drilling engineers run PDC bits more effectively in the field.
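
    The specific-energy reasoning above can be illustrated with Teale's mechanical specific energy, the standard drilling-mechanics definition (a generic sketch; the authors' exact formulation may differ):

```python
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_m_per_hr, bit_diam_m):
    """Teale's mechanical specific energy (J/m^3, i.e. Pa): a thrust term
    plus a rotary term.  MSE well above the rock's confined strength
    suggests inefficiency such as bit balling or chip hold-down."""
    area = math.pi * (bit_diam_m / 2) ** 2
    rop_m_per_s = rop_m_per_hr / 3600.0
    rev_per_s = rpm / 60.0
    return wob_n / area + (2 * math.pi * rev_per_s * torque_nm) / (area * rop_m_per_s)

# Illustrative inputs for an 8.5-in (0.2159 m) bit
mse = mechanical_specific_energy(
    wob_n=50_000, torque_nm=1_000, rpm=120, rop_m_per_hr=10, bit_diam_m=0.2159)
```

At typical parameters the rotary term dominates, which is why torque and rate of penetration drive most of the pressure-related efficiency loss discussed above.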

  11. An asymptotic analysis of the logrank test.

    PubMed

    Strawderman, R L

    1997-01-01

    Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
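
    The effect of treatment-group imbalance on sample size can be made concrete with Schoenfeld's event-count approximation, in which the allocation factor p(1 - p) appears explicitly (a standard textbook formula, used here for illustration rather than the expansions derived in the paper):

```python
from math import log
from statistics import NormalDist

def required_events(hazard_ratio, alloc_prop, alpha=0.05, power=0.80):
    """Schoenfeld's approximation to the number of events needed by a
    two-sided logrank test; alloc_prop is the fraction assigned to one
    arm.  The p*(1-p) factor shows how imbalance inflates the required
    event count, since it is maximized at p = 0.5."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p = alloc_prop
    return (z_a + z_b) ** 2 / (p * (1 - p) * log(hazard_ratio) ** 2)

d_balanced = required_events(0.7, 0.5)
d_skewed = required_events(0.7, 0.2)   # 80/20 imbalance needs more events
```

Moving from 50/50 to 80/20 allocation raises the required event count by roughly the ratio 0.25/0.16, about 56%, which is the kind of imbalance effect the asymptotic analysis quantifies.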

  12. Modeling {sup 15}N NMR chemical shift changes in protein backbone with pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Penna, Giovanni, E-mail: glapenna@iccom.cnr.it; Mori, Yoshiharu, E-mail: ymori@ims.ac.jp; Kitahara, Ryo, E-mail: ryo@ph.ritsumei.ac.jp

    2016-08-28

    Nitrogen chemical shift is a useful parameter for determining the backbone three-dimensional structure of proteins. Empirical models for fast calculation of N chemical shifts are improving in reliability, but there are subtle effects that cannot be easily interpreted. Among these, the effects of slight changes in hydrogen bonds, both intramolecular and with water molecules in the solvent, are particularly difficult to predict. On the other hand, these hydrogen bonds are sensitive to changes in the protein environment. In this work, the change of N chemical shift with pressure for backbone segments in the protein ubiquitin is correlated with the change in the population of hydrogen bonds involving the backbone amide group. The different extent of interaction of the protein backbone with water molecules in the solvent is thereby highlighted.

  13. Correlation electron cyclotron emission diagnostic and improved calculation of turbulent temperature fluctuation levels on ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Creely, A. J.; Freethy, S. J.; Burke, W. M.; Conway, G. D.; Leccacorvi, R.; Parkin, W. C.; Terry, D. R.; White, A. E.

    2018-05-01

    A newly upgraded correlation electron cyclotron emission (CECE) diagnostic has been installed on the ASDEX Upgrade tokamak and has begun to perform experimental measurements of electron temperature fluctuations. CECE diagnostics measure small amplitude electron temperature fluctuations by correlating closely spaced heterodyne radiometer channels. This upgrade expanded the system from six channels to thirty, allowing simultaneous measurement of fluctuation level radial profiles without repeat discharges, as well as opening up the possibility of measuring radial turbulent correlation lengths. Newly refined statistical techniques have been developed in order to accurately analyze the fluctuation data collected from the CECE system. This paper presents the hardware upgrades for this system and the analysis techniques used to interpret the raw data, as well as measurements of fluctuation spectra and fluctuation level radial profiles.
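
    The core idea, correlating two channels whose radiometric noise is independent but whose temperature fluctuation is shared, can be demonstrated with synthetic data (a toy sketch, not the diagnostic's actual statistical pipeline; all signal levels are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
common = 0.2 * rng.standard_normal(n)    # shared T_e fluctuation (synthetic)
ch1 = common + rng.standard_normal(n)    # channel 1: signal + independent noise
ch2 = common + rng.standard_normal(n)    # channel 2: signal + independent noise

# Cross-correlating the two channels rejects the uncorrelated radiometric
# noise, leaving an estimate of the common fluctuation power.
cross_power = np.mean(ch1 * ch2)
fluct_level = np.sqrt(max(cross_power, 0.0))
```

Even though each channel is dominated by noise, the zero-lag cross-correlation recovers the common fluctuation amplitude, which is why closely spaced radiometer channels can resolve fluctuations far below the single-channel noise floor.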

  14. A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM

    NASA Astrophysics Data System (ADS)

    Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui

    2014-12-01

    Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that the source spacing has an obvious influence on the investigation depth and detection precision of the resistivity LWD tool, while changing the frequency can improve the resolution of low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy for guiding geosteering, and that it is suitable for simulating the response of resistivity LWD tools.

  15. Effects of growth rate on structural property and adatom migration behaviors for growth of GaInNAs/GaAs (001) by molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Li, Jingling; Gao, Peng; Zhang, Shuguang; Wen, Lei; Gao, Fangliang; Li, Guoqiang

    2018-03-01

    We have investigated the structural properties and the growth mode of GaInNAs films prepared at different growth rates (Rg) by molecular beam epitaxy. The crystalline structure is studied by high resolution X-ray diffraction, and the evolution of GaInNAs film surface morphologies is studied by atomic force microscopy. It is found that both the crystallinity and the surface roughness are improved by increasing Rg, and the change in the growth mode is attributed to adatom migration behaviors, particularly of In atoms, which is verified by elemental analysis. In addition, we present theoretical calculations of the N adsorption energy to show the unique N migration behavior, which is instructive for interpreting the growth mechanism of GaInNAs films.

  16. Monte Carlo simulations of medical imaging modalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, G.P.

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  17. Synthesis, X-ray crystallography characterization, vibrational spectroscopic, molecular electrostatic potential maps, thermodynamic properties studies of N,N'-di(p-thiazole)formamidine.

    PubMed

    Rofouei, M K; Fereyduni, E; Sohrabi, N; Shamsipur, M; Attar Gharamaleki, J; Sundaraganesan, N

    2011-01-01

    In this work, we report a combined experimental and theoretical study on the molecular and vibrational structure of N,N'-di(p-thiazole)formamidine (DpTF). DpTF has been synthesized and characterized by elemental analysis, FT-IR, FT-Raman, 1H NMR and 13C NMR spectroscopy, and X-ray single crystal diffraction. The FT-IR and FT-Raman spectra of DpTF were recorded in the solid phase. The optimized geometry was calculated by the HF and B3LYP methods using the 6-31G(d) basis set. The FT-IR and FT-Raman spectra of DpTF were calculated at the HF/6-31G(d) and B3LYP/6-31G(d) levels and interpreted in terms of potential energy distribution (PED) analysis. The scaled theoretical wavenumbers showed very good agreement with the experimental values. A detailed interpretation of the infrared and Raman spectra of DpTF is reported. On the basis of the vibrational analyses, the thermodynamic properties of the title compound at different temperatures have been calculated, revealing the correlations between Cp,m°, Sm°, Hm° and temperature. Furthermore, molecular electrostatic potential maps (MESP) and the total dipole moment of the compound have been calculated. Copyright © 2010 Elsevier B.V. All rights reserved.
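
    The temperature-dependent thermodynamic properties derived from a vibrational analysis follow from the harmonic-oscillator partition function. The sketch below computes the constant-volume vibrational heat-capacity contribution from a few illustrative wavenumbers (not DpTF's actual frequencies):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e10    # speed of light, cm/s
R = 8.314462618      # gas constant, J/(mol K)

def vibrational_cv(wavenumbers_cm1, temperature):
    """Harmonic-oscillator vibrational heat capacity (J mol^-1 K^-1)
    summed over modes given as scaled wavenumbers in cm^-1, as used when
    deriving thermodynamic properties from a computed frequency set."""
    cv = 0.0
    for nu in wavenumbers_cm1:
        theta = H * C * nu / KB            # vibrational temperature, K
        x = theta / temperature
        cv += R * x ** 2 * math.exp(x) / (math.exp(x) - 1.0) ** 2
    return cv

modes = [500.0, 1000.0, 1600.0, 3100.0]    # illustrative wavenumbers only
cv298 = vibrational_cv(modes, 298.15)
cv500 = vibrational_cv(modes, 500.0)
```

Each mode contributes at most R in the classical limit, so the sum grows monotonically with temperature, reproducing the Cp,m°-versus-temperature correlation reported in the abstract (up to the constant translational and rotational terms).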

  18. Students' Task Interpretation and Conceptual Understanding in an Electronics Laboratory

    ERIC Educational Resources Information Center

    Rivera-Reyes, Presentacion; Lawanto, Oenardi; Pate, Michael L.

    2017-01-01

    Task interpretation is a critical first step for students in the process of self-regulated learning, and a key determinant when they set goals in their learning and select strategies in assigned work. This paper focuses on the explicit and implicit aspects of task interpretation based on Hadwin's model. Laboratory activities improve students'…

  19. An Evaluation of the Effectiveness of National Park Service Interpretive Planning

    ERIC Educational Resources Information Center

    Wells, Marcella

    2008-01-01

    In 2005-2006, the National Park Service Office of Interpretive Planning at Harpers Ferry Center, in collaboration with the author, conducted an evaluation project to (a) assess the appropriateness and quality of specific elements of National Park Service (NPS) interpretive plans, (b) determine where improvements in planning might be made, and (c)…

  20. Improving the effectiveness of ecological site descriptions: General state-and-transition models and the Ecosystem Dynamics Interpretive Tool (EDIT)

    USGS Publications Warehouse

    Bestelmeyer, Brandon T.; Williamson, Jeb C.; Talbot, Curtis J.; Cates, Greg W.; Duniway, Michael C.; Brown, Joel R.

    2016-01-01

    State-and-transition models (STMs) are useful tools for management, but they can be difficult to use and have limited content. STMs created for groups of related ecological sites could simplify and improve their utility. The amount of information linked to models can be increased using tables that communicate management interpretations and important within-group variability. We created a new web-based information system (the Ecosystem Dynamics Interpretive Tool) to house STMs, associated tabular information, and other ecological site data and descriptors. Fewer, more informative, better organized, and easily accessible STMs should increase the accessibility of science information.

  1. Air quality as respiratory health indicator: a critical review.

    PubMed

    Moshammer, Hanns; Wallner, Peter

    2011-09-01

    As part of the European Public Health project IMCA II, the validity and practicability of "air pollution" as a respiratory health indicator were analyzed. The definitions of air quality as an indicator proposed by the WHO project ECOEHIS and by IMCA I were compared. The public availability of the necessary data was checked through access to web-based databases. Practicability and interpretation of the indicator were discussed with project partners and external experts. Air quality serves as a kind of benchmark for good health-related environmental policy. In this sense, it is a relevant health indicator. Although air quality is not directly the responsibility of health policy, its vital importance for the population's health should not be neglected. In principle, data are available to calculate this IMCA indicator for any chosen area in Europe. The indicator is relevant and informative, but calculation and interpretation need input from local expert knowledge. European health policy is well advised to take air quality into account. To that end, an interdisciplinary approach is warranted. The proposed definition of air quality as a (respiratory) health indicator is workable, but correct interpretation depends on expert and local knowledge.

  2. Quantified Uncertainties in Comparative Life Cycle Assessment: What Can Be Concluded?

    PubMed Central

    2018-01-01

    Interpretation of comparative Life Cycle Assessment (LCA) results can be challenging in the presence of uncertainty. To aid in interpreting such results under the goal of any comparative LCA, we aim to provide guidance to practitioners by gaining insights into uncertainty-statistics methods (USMs). We review five USMs—discernibility analysis, impact category relevance, overlap area of probability distributions, null hypothesis significance testing (NHST), and modified NHST–and provide a common notation, terminology, and calculation platform. We further cross-compare all USMs by applying them to a case study on electric cars. USMs belong to a confirmatory or an exploratory statistics’ branch, each serving different purposes to practitioners. Results highlight that common uncertainties and the magnitude of differences per impact are key in offering reliable insights. Common uncertainties are particularly important as disregarding them can lead to incorrect recommendations. On the basis of these considerations, we recommend the modified NHST as a confirmatory USM. We also recommend discernibility analysis as an exploratory USM along with recommendations for its improvement, as it disregards the magnitude of the differences. While further research is necessary to support our conclusions, the results and supporting material provided can help LCA practitioners in delivering a more robust basis for decision-making. PMID:29406730
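
    A minimal sketch of one exploratory USM, discernibility analysis: paired Monte Carlo sampling in which a shared ("common") uncertainty multiplies both alternatives, so it cancels from the comparison instead of blurring it. The distributions and scores below are hypothetical:

```python
import random

def discernibility(score_a, score_b, n=20_000, seed=1):
    """Monte Carlo discernibility analysis: in each run, a common
    uncertainty factor is applied to BOTH alternatives before comparing,
    so only the genuine difference between them drives the result.
    Returns the fraction of runs in which alternative A scores lower."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        shared = rng.gauss(0.0, 0.10)          # common uncertainty, both options
        a = score_a(rng) * (1.0 + shared)
        b = score_b(rng) * (1.0 + shared)
        if a < b:
            wins += 1
    return wins / n

# Hypothetical impact scores (e.g. kg CO2e): alternative A vs. reference B
frac = discernibility(lambda r: r.gauss(100, 5), lambda r: r.gauss(110, 5))
```

Dropping the `shared` factor from only one alternative would treat the common uncertainty as independent and dilute the comparison, which is exactly the pitfall the abstract warns leads to incorrect recommendations.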

  3. Jets and Metastability in Quantum Mechanics and Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Farhi, David

    I give a high-level overview of the state of particle physics in the introduction, accessible without any background in the field. I discuss improvements of theoretical and statistical methods used for collider physics. These include telescoping jets, a statistical method which was claimed to allow jet searches to increase their sensitivity by considering several interpretations of each event. We find that indeed multiple interpretations extend the power of searches, for both simple counting experiments and powerful multivariate fitting experiments, at least for h → bb̄ at the LHC. Then I propose a method for automation of background calculations using SCET by appropriating the technology of Monte Carlo generators such as MadGraph. In the third chapter I change gears and discuss the future of the universe. It has long been known that our pocket of the standard model is unstable; there is a lower-energy configuration in a remote part of the configuration space, to which our universe will, eventually, decay. While the timescales involved are on the order of 10^400 years (depending on how exactly one counts) and thus of no immediate worry, I discuss the shortcomings of the standard methods and propose a more physically motivated derivation for the decay rate. I then make various observations about the structure of decays in quantum field theory.

  4. Examining statewide capacity for school health and mental health promotion: a post hoc application of a district capacity-building framework.

    PubMed

    Maras, Melissa A; Weston, Karen J; Blacksmith, Jennifer; Brophy, Chelsey

    2015-03-01

    Schools must possess a variety of capacities to effectively support comprehensive and coordinated school health promotion activities, and researchers have developed a district-level capacity-building framework specific to school health promotion. State-level school health coalitions often support such capacity-building efforts and should embed this work within a data-based, decision-making model. However, there is a lack of guidance for state school health coalitions on how they should collect and use data. This article uses a district-level capacity-building framework to interpret findings from a statewide coordinated school health needs/resource assessment in order to examine statewide capacity for school health promotion. Participants included school personnel (N = 643) from one state. Descriptive statistics were calculated for survey items, with further examination of subgroup differences among school administrators and nurses. Results were then interpreted via a post hoc application of a district-level capacity-building framework. Findings across districts revealed statewide strengths and gaps with regard to leadership and management capacities, internal and external supports, and an indicator of global capacity. Findings support the utility of using a common framework across local and state levels to align efforts and embed capacity-building activities within a data-driven, continuous improvement model. © 2014 Society for Public Health Education.

  5. Mechanical and optical response of [100] lithium fluoride to multi-megabar dynamic pressures

    NASA Astrophysics Data System (ADS)

    Davis, Jean-Paul; Knudson, Marcus D.; Shulenburger, Luke; Crockett, Scott D.

    2016-10-01

    An understanding of the mechanical and optical properties of lithium fluoride (LiF) is essential to its use as a transparent tamper and window for dynamic materials experiments. In order to improve models for this material, we applied iterative Lagrangian analysis to ten independent sets of data from magnetically driven planar shockless compression experiments on single crystal [100] LiF to pressures as high as 350 GPa. We found that the compression response disagreed with a prevalent tabular equation of state for LiF that is commonly used to interpret shockless compression experiments. We also present complementary data from ab initio calculations performed using the diffusion quantum Monte Carlo method. The agreement between these two data sets lends confidence to our interpretation. In order to aid in future experimental analysis, we have modified the tabular equation of state to match the new data. We have also extended knowledge of the optical properties of LiF via shock-compression and shockless compression experiments, refining the transmissibility limit, measuring the refractive index to ˜300 GPa, and confirming the nonlinear dependence of the refractive index on density. We present a new model for the refractive index of LiF that includes temperature dependence and describe a procedure for correcting apparent velocity to true velocity for dynamic compression experiments.

  6. Pharmacoeconomics of inhaled anesthetic agents: considerations for the pharmacist.

    PubMed

    Chernin, Eric L

    2004-10-15

    Types of economic analyses used for inhaled anesthetic agents, factors to consider in calculating the cost of inhaled anesthetics, limitations of pharmacoeconomic studies of these agents, and strategies for controlling inhaled anesthetic costs are discussed. Inhaled anesthetic agents comprise a substantial component of drug budgets. Calculation of the cost of administering an inhaled anesthetic should take into consideration the cost per mL, potency, waste, concentration and duration of gas delivery, fresh gas flow rate, molecular weight, and density. The use of newer inhaled anesthetic agents with low solubility in blood and tissue provides a more rapid recovery from anesthesia than older, more soluble agents, and also provides the same level of control of depth of anesthesia at a lower fresh gas flow rate and possibly a lower cost than older agents at a higher fresh gas flow rate. A more rapid recovery may facilitate fast-track recovery and yield cost savings if it allows the completion of additional surgical cases or allows a reduction in personnel overtime expenses. Interpretation of pharmacoeconomic studies of inhaled anesthetics requires an appreciation of the limitations in methodology and ability to extrapolate results from one setting to another. Pharmacists' efforts to reduce anesthetic waste and collaborate with anesthesiologists to improve the use of these agents can help contain costs, but improving scheduling and efficiency in the operating room has a greater potential to reduce operating room costs. Much can be done to control costs of anesthetic agents without compromising availability of these agents and patient care.
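
    The cost factors listed above combine into a widely cited back-of-the-envelope formula for liquid volatile-agent cost (often attributed to Dion); it is shown here as an illustration, not as this article's own derivation, and the prices used are invented:

```python
def volatile_agent_cost(pct, flow_l_min, minutes, mol_wt, cost_per_ml, density):
    """Estimated cost of liquid agent consumed by a vaporizer:
    cost = (P * F * T * M * C) / (2412 * d), where P is the delivered
    concentration in volume percent, F the fresh gas flow (L/min), T the
    duration (min), M the molecular weight (g), C the cost per mL of
    liquid, d the liquid density (g/mL), and 2412 ~ 24.12 L/mol of vapor
    at room temperature times 100 (for the percent)."""
    return (pct * flow_l_min * minutes * mol_wt * cost_per_ml) / (2412 * density)

# Sevoflurane-like agent (M = 200 g/mol, d = 1.52 g/mL), illustrative price:
high_flow = volatile_agent_cost(2.0, 2.0, 60, 200.0, 0.50, 1.52)
low_flow = volatile_agent_cost(2.0, 1.0, 60, 200.0, 0.50, 1.52)
```

Because cost scales linearly with fresh gas flow, halving the flow halves the agent cost at the same delivered concentration, which is the main lever behind the low-flow strategies discussed above.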

  7. Additivity rules using similarity models for chemical reactivity: calculation and interpretation of electrofugality and nucleofugality.

    PubMed

    Bentley, T William

    2006-08-25

    A recently proposed multi-parameter correlation, log k(25 °C) = sf(Ef + Nf), where Ef is electrofugality and Nf is nucleofugality, for the substituent and solvent effects on the rate constants for solvolyses of benzhydryl and substituted benzhydryl substrates, is re-evaluated. A new formula, Ef = log k(RCl/EtOH/25 °C) - 1.87, where RCl/EtOH refers to ethanolysis of chlorides, reproduces published values of Ef satisfactorily, avoids multi-parameter optimisations and provides additional values of Ef. From the formula for Ef, it is shown that the term (sf x Ef) is compatible with the Hammett-Brown (ρ+σ+) equation for substituent effects. However, the previously published values of Nf do not accurately account for solvent and leaving-group effects (e.g. nucleofuge Cl or X), even for benzhydryl solvolyses; alternatively, if the more exact two-parameter term (sf x Nf) is used, calculated effects are less accurate. A new formula, Nf = 6.14 + log k(BX/any solvent/25 °C), where BX refers to solvolysis of the parent benzhydryl as electrofuge, defines improved Nf values for benzhydryl substrates. The new formulae for Ef and Nf are consistent with the assumption that sf = 1.00, and so improved correlations for benzhydryl substrates can be obtained from the additive formula log k(RX/any solvent/25 °C) = Ef + Nf. Possible extensions of this approach are also discussed.
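
    The additivity scheme in the abstract reduces to simple arithmetic once the two formulas are written out; the rate constants below are hypothetical inputs purely to exercise the calculation:

```python
def electrofugality(log_k_rcl_etoh):
    """Ef = log k(RCl/EtOH/25 C) - 1.87 (the abstract's new formula)."""
    return log_k_rcl_etoh - 1.87

def nucleofugality(log_k_benzhydryl):
    """Nf = 6.14 + log k(parent benzhydryl-X in the given solvent, 25 C)."""
    return 6.14 + log_k_benzhydryl

def predicted_log_k(ef, nf):
    """Additive estimate with sf assumed to be 1.00: log k = Ef + Nf."""
    return ef + nf

# Hypothetical rate constants, chosen only to illustrate the arithmetic
ef = electrofugality(-3.0)        # ~ -4.87
nf = nucleofugality(-2.0)         # ~  4.14
log_k = predicted_log_k(ef, nf)   # ~ -0.73
```

The attraction of the scheme is that each substrate/solvent pair contributes one measured number, and any cross combination is then predicted by addition alone.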

  8. The effect of a broad activation energy distribution on deuteron spin-lattice relaxation.

    PubMed

    Ylinen, E E; Punkkinen, M; Birczyński, A; Lalowicz, Z T

    2015-10-01

    Deuteron NMR spectra and spin-lattice relaxation were studied experimentally in zeolite NaY(2.4) samples containing 100% or 200% of CD3OH or CD3OD molecules of the total coverage of Na atoms in the temperature range 20-150K. The activation energies describing the methyl and hydroxyl motions show broad distributions. The relaxation data were interpreted by improving a recent model (Stoch et al., 2013 [16]), in which the nonexponential relaxation curves are first described by a sum of three exponentials with adjustable relaxation rates and weights. Then a broad distribution of activation energies (mean activation energy A0 and width σ) was assumed for each essentially different methyl and hydroxyl position. The correlation times were calculated from the Arrhenius equation (containing the pre-exponential factor τ0), individual relaxation rates computed and classified into three classes, and finally initial relaxation rates and weights for each class formed. These were compared with experimental data, the motional parameters changed slightly, new improved rates and weights for each class calculated, and so on. This method was improved by deriving, for the deuterons of the A- and E-species methyl groups, relaxation rates that depend explicitly on the tunnel frequency ωt. The temperature dependence of ωt and of the low-temperature correlation time were obtained by using the solutions of the Mathieu equation for a threefold potential. These dependencies were included in the simulations, and as a result sets of A0, σ and τ0 were obtained that describe the methyl and hydroxyl motions in different positions in the zeolite. Copyright © 2015 Elsevier Inc. All rights reserved.
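
    The central computation, turning a Gaussian distribution of activation energies into a distribution of Arrhenius correlation times and hence a nonexponential relaxation curve, can be sketched as follows (a toy model taking 1/τ as the relaxation rate, not the paper's quadrupolar relaxation expressions; all parameter values are illustrative):

```python
import math

def mixture_recovery(t, temp_k, a0, sigma, tau0):
    """Relaxation curve averaged over a Gaussian distribution of
    activation energies (mean a0, width sigma, both in kelvin).  Each
    energy A gives an Arrhenius correlation time tau = tau0*exp(A/T);
    using 1/tau as a toy rate, the weighted sum of exponentials is
    nonexponential whenever sigma > 0."""
    total, norm = 0.0, 0.0
    for i in range(-40, 41):                    # grid over +/- 4 sigma
        a = a0 + sigma * (i / 10.0)
        w = math.exp(-0.5 * (i / 10.0) ** 2)    # Gaussian weight
        rate = 1.0 / (tau0 * math.exp(a / temp_k))
        total += w * math.exp(-rate * t)
        norm += w
    return total / norm

r1 = mixture_recovery(1.0, 80.0, 1500.0, 200.0, 1e-12)
r2 = mixture_recovery(2.0, 80.0, 1500.0, 200.0, 1e-12)
```

For a single exponential one would have r(2t) = r(t)^2 exactly; for the mixture r(2t) > r(t)^2, which is one quantitative signature of the nonexponential recovery that the three-exponential fits approximate.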

  9. Dentist-patient communication in the multilingual dental setting.

    PubMed

    Goldsmith, C; Slack-Smith, L; Davies, G

    2005-12-01

    Communication between dentists and patients can be exceptionally challenging when the patient and the dentist do not speak the same language, as is frequently the case in multicultural Australia. The aim of this study was to describe the issues involved in dealing with limited-English speaking patients in order to formulate recommendations on how to improve dental communication. A cross sectional study was performed using a postal survey to Australian Dental Association member dental practitioners in Western Australia. Responses were collated and data analysis was performed using SPSS 11.5 for Windows. Most respondents encounter language-related communication barriers weekly or monthly, and the most satisfactory method of communication is informal interpreters. Despite reporting satisfaction working with professional chairside interpreters or dental staff interpreters, most respondents did not use them. The most common alternative communication methods were diagrams and models. Endodontics and periodontics provided the greatest challenge in communication. Informed consent was reportedly compromised due to language barriers by 29 per cent of respondents. Recommendations to improve communication included access to interpretation services, dentist technique/attitude to communication and patient preparedness for English-speaking encounters. Many respondents do not utilize the preferential communication methods, creating a potential compromise to both informed consent and the patients' best interests. The use of professional interpreters is recommended, and discussion should be supplemented with means of non-verbal communication. Dentists require access to lists of multilingual dentists and greater awareness of interpretation services to improve multilingual dentist-patient communication.

  10. Can I Count on Getting Better? Association between Math Anxiety and Poorer Understanding of Medical Risk Reductions.

    PubMed

    Rolison, Jonathan J; Morsanyi, Kinga; O'Connor, Patrick A

    2016-10-01

    Lower numerical ability is associated with poorer understanding of health statistics, such as risk reductions of medical treatment. For many people, despite good numeracy skills, math provokes anxiety that impedes an ability to evaluate numerical information. Math-anxious individuals also report less confidence in their ability to perform math tasks. We hypothesized that, independent of objective numeracy, math anxiety would be associated with poorer responding and lower confidence when calculating risk reductions of medical treatments. Objective numeracy was assessed using an 11-item objective numeracy scale. A 13-item self-report scale was used to assess math anxiety. In experiment 1, participants were asked to interpret the baseline risk of disease and risk reductions associated with treatment options. Participants in experiment 2 were additionally provided a graphical display designed to facilitate the processing of math information and alleviate effects of math anxiety. Confidence ratings were provided on a 7-point scale. Individuals of higher objective numeracy were more likely to respond correctly to baseline risks and risk reductions associated with treatment options and were more confident in their interpretations. Individuals who scored high in math anxiety were instead less likely to correctly interpret the baseline risks and risk reductions and were less confident in their risk calculations as well as in their assessments of the effectiveness of treatment options. Math anxiety predicted confidence levels but not correct responding when controlling for objective numeracy. The graphical display was most effective in increasing confidence among math-anxious individuals. The findings suggest that math anxiety is associated with poorer medical risk interpretation but is more strongly related to confidence in interpretations. © The Author(s) 2015.
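
    The quantities participants were asked to interpret reduce to a few lines of arithmetic; the 20%-to-15% example below is illustrative, not taken from the study's materials:

```python
def risk_reduction(baseline_risk, treated_risk):
    """Absolute risk reduction (ARR), relative risk reduction (RRR), and
    number needed to treat (NNT) from a baseline risk and the risk under
    treatment, both expressed as proportions."""
    arr = baseline_risk - treated_risk
    rrr = arr / baseline_risk
    nnt = 1.0 / arr
    return arr, rrr, nnt

# e.g. a treatment that cuts 10-year disease risk from 20% to 15%
arr, rrr, nnt = risk_reduction(0.20, 0.15)   # ~0.05, 0.25, 20 patients
```

The same treatment effect reads as a 25% relative reduction but only a 5-point absolute reduction, a gap that is exactly where math-anxious respondents in the study lost both accuracy and confidence.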

  11. Modeling and evaluation of the oil-spill emergency response capability based on linguistic variables.

    PubMed

    Kang, Jian; Zhang, Jixin; Bai, Yongqiang

    2016-12-15

    An evaluation of the oil-spill emergency response capability (OS-ERC) currently in place in modern marine management is required to prevent pollution and loss accidents. The objective of this paper is to develop a novel OS-ERC evaluation model, the importance of which stems from the current lack of integrated approaches for interpreting, ranking and assessing OS-ERC performance factors. In the first part of this paper, the factors influencing OS-ERC are analyzed and classified to generate a global evaluation index system. Then, a semantic tree is adopted to illustrate linguistic variables in the evaluation process, followed by the application of a combination of Fuzzy Cognitive Maps (FCM) and the Analytic Hierarchy Process (AHP) to construct and calculate the weight distribution. Finally, considering that the OS-ERC evaluation process is a complex system, a fuzzy comprehensive evaluation (FCE) is employed to calculate the OS-ERC level. The entire evaluation framework obtains the overall level of OS-ERC, and also highlights the potential major issues concerning OS-ERC, as well as expert opinions for improving the feasibility of oil-spill accident prevention and protection. Copyright © 2016 Elsevier Ltd. All rights reserved.
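
    The final fuzzy comprehensive evaluation (FCE) step is a weighted aggregation of membership grades; a minimal sketch follows, with hypothetical factors, grades, and weights standing in for the paper's full AHP/FCM-derived index system:

```python
def fuzzy_comprehensive_evaluation(weights, membership):
    """Weighted-average FCE: B = W . R, where each row of R gives one
    factor's membership in the rating grades and W holds the factor
    weights.  Returns the normalized grade memberships."""
    grades = len(membership[0])
    b = [sum(w * row[g] for w, row in zip(weights, membership))
         for g in range(grades)]
    total = sum(b)
    return [v / total for v in b]

# 3 factors (e.g. equipment, personnel, procedures) x 3 grades (good/fair/poor)
W = [0.5, 0.3, 0.2]                       # illustrative AHP/FCM weights
R = [[0.6, 0.3, 0.1],
     [0.4, 0.4, 0.2],
     [0.2, 0.5, 0.3]]
B = fuzzy_comprehensive_evaluation(W, R)  # highest membership = overall level
```

The grade with the largest aggregated membership is then read off as the overall OS-ERC level, while the remaining memberships indicate how marginal that classification is.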

  12. Assessing Principal Component Regression Prediction of Neurochemicals Detected with Fast-Scan Cyclic Voltammetry

    PubMed Central

    2011-01-01

    Principal component regression is a multivariate data analysis approach routinely used to predict neurochemical concentrations from in vivo fast-scan cyclic voltammetry measurements. This mathematical procedure can rapidly be employed with present day computer programming languages. Here, we evaluate several methods that can be used to evaluate and improve multivariate concentration determination. The cyclic voltammetric representation of the calculated regression vector is shown to be a valuable tool in determining whether the calculated multivariate model is chemically appropriate. The use of Cook’s distance successfully identified outliers contained within in vivo fast-scan cyclic voltammetry training sets. This work also presents the first direct interpretation of a residual color plot and demonstrated the effect of peak shifts on predicted dopamine concentrations. Finally, separate analyses of smaller increments of a single continuous measurement could not be concatenated without substantial error in the predicted neurochemical concentrations due to electrode drift. Taken together, these tools allow for the construction of more robust multivariate calibration models and provide the first approach to assess the predictive ability of a procedure that is inherently impossible to validate because of the lack of in vivo standards. PMID:21966586

  13. Practical considerations for obtaining high quality quantitative computed tomography data of the skeletal system.

    PubMed

    Troy, Karen L; Edwards, W Brent

    2018-05-01

    Quantitative CT (QCT) analysis involves the calculation of specific parameters such as bone volume and density from CT image data, and can be a powerful tool for understanding bone quality and quantity. However, without careful attention to detail during all steps of the acquisition and analysis process, data can be of poor- to unusable-quality. Good quality QCT for research requires meticulous attention to detail and standardization of all aspects of data collection and analysis to a degree that is uncommon in a clinical setting. Here, we review the literature to summarize practical and technical considerations for obtaining high quality QCT data, and provide examples of how each recommendation affects calculated variables. We also provide an overview of the QCT analysis technique to illustrate additional opportunities to improve data reproducibility and reliability. Key recommendations include: standardizing the scanner and data acquisition settings, minimizing image artifacts, selecting an appropriate reconstruction algorithm, and maximizing repeatability and objectivity during QCT analysis. The goal of the recommendations is to reduce potential sources of error throughout the analysis, from scan acquisition to the interpretation of results. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Assessing principal component regression prediction of neurochemicals detected with fast-scan cyclic voltammetry.

    PubMed

    Keithley, Richard B; Wightman, R Mark

    2011-06-07

    Principal component regression is a multivariate data analysis approach routinely used to predict neurochemical concentrations from in vivo fast-scan cyclic voltammetry measurements. This mathematical procedure can rapidly be employed with present day computer programming languages. Here, we evaluate several methods that can be used to evaluate and improve multivariate concentration determination. The cyclic voltammetric representation of the calculated regression vector is shown to be a valuable tool in determining whether the calculated multivariate model is chemically appropriate. The use of Cook's distance successfully identified outliers contained within in vivo fast-scan cyclic voltammetry training sets. This work also presents the first direct interpretation of a residual color plot and demonstrated the effect of peak shifts on predicted dopamine concentrations. Finally, separate analyses of smaller increments of a single continuous measurement could not be concatenated without substantial error in the predicted neurochemical concentrations due to electrode drift. Taken together, these tools allow for the construction of more robust multivariate calibration models and provide the first approach to assess the predictive ability of a procedure that is inherently impossible to validate because of the lack of in vivo standards.
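    The Cook's-distance outlier screen described in this record can be illustrated for the simplest case, a univariate linear regression; the paper applies the idea to multivariate voltammetry training sets, so this is only a sketch of the statistic itself, with the common rule of thumb D > 4/n used to flag influential points.

```python
from math import fsum

def cooks_distance(x, y):
    """Cook's distance for each observation under simple linear regression
    (ordinary least squares, two parameters: intercept and slope)."""
    n = len(x)
    xbar = fsum(x) / n
    ybar = fsum(y) / n
    sxx = fsum((xi - xbar) ** 2 for xi in x)
    slope = fsum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    mse = fsum(e * e for e in resid) / (n - 2)
    p = 2  # number of fitted parameters
    d = []
    for xi, ei in zip(x, resid):
        h = 1.0 / n + (xi - xbar) ** 2 / sxx   # leverage of observation i
        d.append(ei * ei / (p * mse) * h / (1.0 - h) ** 2)
    return d
```

    A point whose Cook's distance dominates the rest of the training set would be reviewed and potentially excluded before rebuilding the calibration model.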

  15. Proper expression of metabolizable energy in avian energetics

    USGS Publications Warehouse

    Miller, M.R.; Reinecke, K.J.

    1984-01-01

    We review metabolizable energy (ME) concepts and present evidence suggesting that the form of ME used for analyses of avian energetics can affect interpretation of results. Apparent ME (AME) is the most widely used measure of food energy available to birds. True ME (TME) differs from AME in recognizing fecal and urinary energy of nonfood origin as metabolized energy. Only AME values obtained from test birds fed at maintenance levels should be used for energy analyses. A practical assay for TME has shown that TME estimates are less sensitive than AME to variation in food intake. The TME assay may be particularly useful in studies of natural foods that are difficult to obtain in quantities large enough to supply test birds with maintenance requirements. Energy budgets calculated from existence metabolism should be expressed as kJ of AME and converted to food requirements with estimates of metabolizability given in kJ AME/g. Energy budgets calculated from multiples of basal metabolic rate (a component of maintenance energy), however, should be expressed as kJ of either TME or net energy depending on ambient temperature. Energy units should be stated explicitly to improve comparability and in some cases accuracy of energy analyses.

  16. Contribution of spontaneous improvement to placebo response in depression: a meta-analytic review.

    PubMed

    Rutherford, Bret R; Mori, Shoko; Sneed, Joel R; Pimontel, Monique A; Roose, Steven P

    2012-06-01

    It is unknown to what degree spontaneous improvement accounts for the large placebo response observed in antidepressant trials for Major Depressive Disorder (MDD). The purpose of this study was to estimate the spontaneous improvement observed in treatment-seeking individuals with acute MDD by determining the symptom change in depressed patients assigned to wait-list controls in psychotherapy studies. The databases PubMed and PsycINFO were searched to identify randomized, prospective studies randomizing outpatients to psychotherapy or a wait-list control condition for the treatment of acute MDD. Standardized effect sizes calculated from each identified study were aggregated in a meta-analysis to obtain a summary statistic for the change in depression scores during participation in a wait-list control. Ten trials enrolling 340 participants in wait-list control conditions were identified. The estimated effect size for the change in depression scores during wait-list control was 0.505 (95% CI 0.271-0.739, p < 0.001), representing an average improvement of 4 points on the Hamilton Rating Scale for Depression. Depressed patients acutely experience improvement even without treatment, but spontaneous improvement is unlikely to account for the magnitude of placebo response typically observed in antidepressant trials. These findings must be interpreted in light of the small number of wait-list control participants available for analysis, as well as certain methodological heterogeneity in the psychotherapy studies analyzed. Copyright © 2012 Elsevier Ltd. All rights reserved.
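    The aggregation of standardized effect sizes into a summary statistic can be sketched with generic inverse-variance (fixed-effect) pooling. The abstract does not specify the paper's exact meta-analytic model, so the function below is illustrative only and its name is an assumption.

```python
from math import sqrt

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled effect size with an
    approximate 95% confidence interval. `effects` are per-study
    standardized effect sizes; `variances` are their sampling variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```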

  17. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students--a pilot study.

    PubMed

    Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary

    2009-01-13

    Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills, but published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of a teaching puzzle for EKG interpretation skills among medical students. This is a reader-blinded crossover trial. Third year medical students from the College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam and two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. The average score after only traditional teaching was 4.07 +/- 2.08, while after only the puzzle session it was 4.04 +/- 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 +/- 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 +/- 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1 compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxing, and warrant further investigation on a larger scale.

  18. Knowledge-base for interpretation of cerebrospinal fluid data patterns. Essentials in neurology and psychiatry.

    PubMed

    Reiber, Hansotto

    2016-06-01

    The physiological and biophysical knowledge base for the interpretation of cerebrospinal fluid (CSF) data and reference ranges is essential for the clinical pathologist and neurochemist. With the accessible description of the CSF flow-dependent barrier function, the dynamics and concentration gradients of blood-derived, brain-derived and leptomeningeal proteins in CSF, and the specificity-independent functions of B-lymphocytes in the brain, the neurologist, psychiatrist, neurosurgeon and neuropharmacologist may also find essentials for diagnosis, research or the development of therapies. This review may help to replace outdated ideas such as "leakage" models of the barriers, linear immunoglobulin index interpretations and CSF electrophoresis. Calculations, interpretations and analytical pitfalls are described for albumin quotients, quantitation of immunoglobulin synthesis in Reibergrams, oligoclonal IgG, IgM analysis, the polyspecific (MRZ-) antibody reaction, the statistical treatment of CSF data and general quality assessment in the CSF laboratory. The diagnostic relevance is documented in an accompanying review.

  19. The Efficacy of Mammography Boot Camp to Improve the Performance of Radiologists

    PubMed Central

    Lee, Eun Hye; Jung, Seung Eun; Kim, You Me; Choi, Nami

    2014-01-01

    Objective To evaluate the efficacy of a mammography boot camp (MBC) to improve radiologists' performance in interpreting mammograms in the National Cancer Screening Program (NCSP) in Korea. Materials and Methods Between January and July of 2013, 141 radiologists were invited to a 3-day educational program composed of lectures and group practice readings using 250 digital mammography cases. The radiologists' performance in interpreting mammograms was evaluated using a pre- and post-camp test set of 25 cases validated prior to the camp by experienced breast radiologists. Factors affecting the radiologists' performance, including age, type of attending institution, and type of test set cases, were analyzed. Results The average scores of the pre- and post-camp tests were 56.0 ± 12.2 and 78.3 ± 9.2, respectively (p < 0.001). The post-camp test scores were higher than the pre-camp test scores for all age groups and all types of attending institutions (p < 0.001). The rate of incorrect answers in the post-camp test decreased compared to the pre-camp test for all suspicious cases, but not for negative cases (p > 0.05). Conclusion The MBC improves radiologists' performance in interpreting mammograms irrespective of age and type of attending institution. Improved interpretation is observed for suspicious cases, but not for negative cases. PMID:25246818

  20. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    PubMed

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.

  1. The mediating role of insight for long-term improvements in psychodynamic therapy.

    PubMed

    Johansson, Paul; Høglend, Per; Ulberg, Randi; Amlo, Svein; Marble, Alice; Bøgwald, Kjell-Petter; Sørbye, Oystein; Sjaastad, Mary Cosgrove; Heyerdahl, Oscar

    2010-06-01

    According to psychoanalytic theory, interpretation of transference leads to increased insight, which in turn leads to improved interpersonal functioning over time. In this study, we performed a full mediational analysis to test whether insight gained during treatment mediates the long-term effects of transference interpretation in dynamic psychotherapy. This study is a randomized clinical trial with a dismantling design. One hundred outpatients seeking psychotherapy for depression, anxiety, personality disorders, and interpersonal problems were randomly assigned to 1 year of weekly sessions of dynamic psychotherapy with transference interpretation or to the same type and duration of treatment with the same therapists but without the use of transference interpretation. Interpersonal functioning and insight were measured pretreatment, posttreatment, and 1 year and 3 years after treatment termination. Contrary to common expectation, patients with a life-long pattern of low quality of object relations and personality disorder pathology profited more from therapy with transference interpretation than from therapy with no transference interpretation. This long-term effect was mediated by an increase in the level of insight during treatment. Insight seems to be a key mechanism of change in dynamic psychotherapy. Our results bridge the gap between clinical theory and empirical research.

  2. eeDAP: An Evaluation Environment for Digital and Analog Pathology.

    PubMed

    Gallas, Brandon D; Cheng, Wei-Chung; Gavrielides, Marios A; Ivansky, Adam; Keay, Tyler; Wunderlich, Adam; Hipp, Jason; Hewitt, Stephen M

    2014-01-01

    The purpose of this work is to present a platform for designing and executing studies that compare pathologists interpreting histopathology of whole slide images (WSI) on a computer display to pathologists interpreting glass slides on an optical microscope. Here we present eeDAP, an evaluation environment for digital and analog pathology. The key element in eeDAP is the registration of the WSI to the glass slide. Registration is accomplished through computer control of the microscope stage and a camera mounted on the microscope that acquires images of the real time microscope view. Registration allows for the evaluation of the same regions of interest (ROIs) in both domains. This can reduce or eliminate disagreements that arise from pathologists interpreting different areas and focuses the comparison on image quality. We reduced the pathologist interpretation area from an entire glass slide (≈10-30 mm)² to small ROIs smaller than (50 µm)². We also made possible the evaluation of individual cells. We summarize eeDAP's software and hardware and provide calculations and corresponding images of the microscope field of view and the ROIs extracted from the WSIs. These calculations help provide a sense of eeDAP's functionality and operating principles, while the images provide a sense of the look and feel of studies that can be conducted in the digital and analog domains. The eeDAP software can be downloaded from code.google.com (project: eeDAP) as Matlab source or as a precompiled stand-alone license-free application.

  3. Stacking fault energies and slip in nanocrystalline metals.

    PubMed

    Van Swygenhoven, H; Derlet, P M; Frøseth, A G

    2004-06-01

    The search for deformation mechanisms in nanocrystalline metals has profited from the use of molecular dynamics calculations. These simulations have revealed two possible mechanisms: grain boundary accommodation, and intragranular slip involving dislocation emission and absorption at grain boundaries. But the precise nature of the slip mechanism is the subject of considerable debate, and the limitations of the simulation technique need to be taken into consideration. Here we show, using molecular dynamics simulations, that the nature of slip in nanocrystalline metals cannot be described in terms of the absolute value of the stacking fault energy; a correct interpretation requires the generalized stacking fault energy curve, involving both stable and unstable stacking fault energies. The molecular dynamics technique does not at present allow for the determination of rate-limiting processes, so the use of our calculations in the interpretation of experiments has to be undertaken with care.

  4. UV-POSIT: Web-Based Tools for Rapid and Facile Structural Interpretation of Ultraviolet Photodissociation (UVPD) Mass Spectra

    NASA Astrophysics Data System (ADS)

    Rosenberg, Jake; Parker, W. Ryan; Cammarata, Michael B.; Brodbelt, Jennifer S.

    2018-04-01

    UV-POSIT (Ultraviolet Photodissociation Online Structure Interrogation Tools) is a suite of web-based tools designed to facilitate the rapid interpretation of data from native mass spectrometry experiments making use of 193 nm ultraviolet photodissociation (UVPD). The suite includes four separate utilities which assist in the calculation of fragment ion abundances as a function of backbone cleavage sites and sequence position; the localization of charge sites in intact proteins; the calculation of hydrogen elimination propensity for a-type fragment ions; and mass-offset searching of UVPD spectra to identify unknown modifications and assess false positive fragment identifications. UV-POSIT is implemented as a Python/Flask web application hosted at http://uv-posit.cm.utexas.edu. UV-POSIT is available under the MIT license, and the source code is available at https://github.com/jarosenb/UV_POSIT.

  5. UV-POSIT: Web-Based Tools for Rapid and Facile Structural Interpretation of Ultraviolet Photodissociation (UVPD) Mass Spectra.

    PubMed

    Rosenberg, Jake; Parker, W Ryan; Cammarata, Michael B; Brodbelt, Jennifer S

    2018-06-01

    UV-POSIT (Ultraviolet Photodissociation Online Structure Interrogation Tools) is a suite of web-based tools designed to facilitate the rapid interpretation of data from native mass spectrometry experiments making use of 193 nm ultraviolet photodissociation (UVPD). The suite includes four separate utilities which assist in the calculation of fragment ion abundances as a function of backbone cleavage sites and sequence position; the localization of charge sites in intact proteins; the calculation of hydrogen elimination propensity for a-type fragment ions; and mass-offset searching of UVPD spectra to identify unknown modifications and assess false positive fragment identifications. UV-POSIT is implemented as a Python/Flask web application hosted at http://uv-posit.cm.utexas.edu. UV-POSIT is available under the MIT license, and the source code is available at https://github.com/jarosenb/UV_POSIT.

  6. A geometrical interpretation of the 2n-th central difference

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.

    1972-01-01

    Many algorithms used for data smoothing, data classification and error detection require the calculation of the distance from a point to the polynomial interpolating its 2n neighbors (n on each side). This computation, if performed naively, would require the solution of a system of equations and could create numerical problems. This note shows that if the data is equally spaced, then this calculation can be performed using a simple recursion formula.
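    The recursion the note describes can be sketched as follows: for equally spaced data, the vertical distance from a point to the polynomial interpolating its 2n neighbors (n on each side) reduces to the 2n-th central difference scaled by a binomial coefficient, the classical identity |Δ²ⁿy_i| / C(2n, n), with no linear system to solve. The function names below are illustrative.

```python
from math import comb

def central_diff(y, i, order):
    """order-th central difference of the sequence y centered at index i
    (order even), computed by repeated first differencing."""
    window = list(y[i - order // 2 : i + order // 2 + 1])
    for _ in range(order):
        window = [b - a for a, b in zip(window, window[1:])]
    return window[0]

def distance_to_interpolant(y, i, n):
    """Vertical distance from (x_i, y_i) to the polynomial interpolating
    its 2n equally spaced neighbors, via the central-difference identity."""
    return abs(central_diff(y, i, 2 * n)) / comb(2 * n, n)
```

    For n = 1 this recovers the familiar |y_{i-1} - 2y_i + y_{i+1}| / 2, the distance from a point to the chord through its two neighbors.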

  7. Interpretation of the Total Magnetic Field Anomalies Measured by the CHAMP Satellite Over a Part of Europe and the Pannonian Basin

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Toronyi, B.; Puszta, S.

    2012-01-01

    In this study we interpret the magnetic anomalies at satellite altitude over a part of Europe and the Pannonian Basin. These anomalies are derived from the total magnetic field measurements of the CHAMP satellite. The anomalies were reduced to an elevation of 324 km. An inversion method is used to interpret the total magnetic anomalies over the Pannonian Basin. A three dimensional triangular model is used in the inversion. Two parameter distributions, Laplacian and Gaussian, are investigated. The regularized inversion is numerically calculated with the Simplex and Simulated Annealing methods, and the anomalous source is located in the upper crust. A probable source of the magnetization is the exsolution of the hematite-ilmenite minerals.

  8. Interpretation of searches for supersymmetry with simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  9. Advances in computer-aided well-test interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, R.N.

    1994-07-01

    Despite the feeling, expressed several times over the past 40 years, that well-test analysis had reached its peak development, an examination of recent advances shows continuous expansion in capability, with further improvement likely. The expansion in interpretation capability over the past decade arose mainly from the development of computer-aided techniques, which, although introduced 20 years ago, have come into use only recently. The broad application of computer-aided interpretation originated with the improvement of the methodologies and continued with the expansion in computer access and capability that accompanied the explosive development of the microcomputer industry. This paper focuses on the different pieces of the methodology that combine to constitute a computer-aided interpretation and attempts to compare some of the approaches currently used. Future directions of the approach are also discussed. The separate areas discussed are deconvolution, pressure derivatives, model recognition, nonlinear regression, and confidence intervals.

  10. On the adequacy of modeling the concentration dependences of the activity coefficients for the components of solutions

    NASA Astrophysics Data System (ADS)

    Sergievskii, V. V.; Rudakov, A. M.

    2006-11-01

    An analysis of the accepted methods for calculating the activity coefficients for the components of binary aqueous solutions was performed. It was demonstrated that the use of the osmotic coefficients in auxiliary calculations decreases the accuracy of estimates of the activity coefficients. The possibility of calculating the activity coefficient of the solute from the concentration dependence of the water activity was examined. It was established that, for weak electrolytes, the interpretation of data on heterogeneous equilibria within the framework of the standard assumption that the dissociation is complete encounters serious difficulties.

  11. Hole-to-surface resistivity measurements.

    USGS Publications Warehouse

    Daniels, J.J.

    1983-01-01

    Hole-to-surface resistivity measurements over a layered volcanic tuff sequence illustrate procedures for gathering, reducing, and interpreting hole-to-surface resistivity data. The magnitude and direction of the total surface electric field resulting from a buried current source is calculated from orthogonal potential difference measurements for a grid of closely spaced stations. A contour map of these data provides a detailed map of the distribution of the electric field away from the drill hole. Resistivity anomalies can be enhanced by calculating the difference between apparent resistivities calculated from the total surface electric field and apparent resistivities for a layered earth model.-from Author
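    The reduction step described above, computing the magnitude and direction of the total surface electric field from two orthogonal potential-difference measurements at each station, can be sketched as follows. This is a hypothetical helper assuming uniform electrode spacing and the convention E = -∇V; the function name and argument layout are not from the paper.

```python
from math import hypot, atan2, degrees

def total_field(dv_x, dv_y, dx, dy):
    """Estimate the total surface electric field at a grid station from
    orthogonal potential differences dv_x, dv_y measured over electrode
    separations dx, dy. Returns (magnitude, azimuth in degrees from the
    x axis), using E = -grad V approximated by finite differences."""
    ex = -dv_x / dx
    ey = -dv_y / dy
    return hypot(ex, ey), degrees(atan2(ey, ex))
```

    Contouring the returned magnitudes over the station grid gives the map of the electric field distribution away from the drill hole described in the abstract.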

  12. Generation of a dynamo magnetic field in a protoplanetary accretion disk

    NASA Technical Reports Server (NTRS)

    Stepinski, T.; Levy, E. H.

    1987-01-01

    A new computational technique is developed that allows realistic calculations of dynamo magnetic field generation in disk geometries corresponding to protoplanetary and protostellar accretion disks. The approach is of sufficient generality to allow, in the future, a wide class of accretion disk problems to be solved. Here, basic modes of a disk dynamo are calculated. Spatially localized oscillatory states are found to occur in Keplerian disks. A physical interpretation is given that argues that spatially localized fields of the type found in these calculations constitute the basic modes of a Keplerian disk dynamo.

  13. Interpretation of the rainbow color scale for quantitative medical imaging: perceptually linear color calibration (CSDF) versus DICOM GSDF

    NASA Astrophysics Data System (ADS)

    Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom

    2017-03-01

    Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration that uses the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g.
PET SUV, quantitative MRI and CT and doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences and where improved interpretation accuracy and improved detection of color differences may contribute to a better diagnosis. Our results indicate that for diagnostic applications involving both grayscale and color images, CSDF should be chosen over DICOM GSDF and sRGB as it assures excellent detection for color images and at the same time maintains DICOM GSDF for grayscale images.

  14. Augmenting Amyloid PET Interpretations With Quantitative Information Improves Consistency of Early Amyloid Detection.

    PubMed

    Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M

    2017-08-01

    Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists, as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five 18F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using a standardized quantitative analysis software (MIMneuro) to generate whole brain and region of interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data together ("VisQ Read"). Cohen's kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
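    The quantitative decision rule quoted in the abstract (elevated if at least 2 of 6 regions of interest have an SUV ratio above 1.1) is simple enough to state in code. The region names in the test below are placeholders, not the study's actual ROI set, and the function name is an assumption.

```python
def quantitative_read(suvr_by_region, threshold=1.1, min_regions=2):
    """Classify a scan as 'elevated' if at least `min_regions` of the
    regional SUV ratios exceed `threshold`, per the rule described in
    the abstract; otherwise 'nonelevated'."""
    n_above = sum(1 for v in suvr_by_region.values() if v > threshold)
    return "elevated" if n_above >= min_regions else "nonelevated"
```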

  15. The Genetic Interpretation of Area under the ROC Curve in Genomic Profiling

    PubMed Central

    Wray, Naomi R.; Yang, Jian; Goddard, Michael E.; Visscher, Peter M.

    2010-01-01

    Genome-wide association studies in human populations have facilitated the creation of genomic profiles which combine the effects of many associated genetic variants to predict risk of disease. The area under the receiver operating characteristic (ROC) curve is a well-established measure for determining the efficacy of tests in correctly classifying diseased and non-diseased individuals. We use quantitative genetics theory to provide insight into the genetic interpretation of the area under the ROC curve (AUC) when the test classifier is a predictor of genetic risk. Even when the proportion of genetic variance explained by the test is 100%, there is a maximum value for AUC that depends on the genetic epidemiology of the disease, i.e., either the sibling recurrence risk or the heritability and disease prevalence. We derive an equation relating maximum AUC to heritability and disease prevalence. The expression can be reversed to calculate the proportion of genetic variance explained given AUC, disease prevalence, and heritability. We use published estimates of disease prevalence and sibling recurrence risk for 17 complex genetic diseases to calculate the proportion of genetic variance that a test must explain to achieve AUC = 0.75; this varied from 0.10 to 0.74. We provide a genetic interpretation of AUC for use with predictors of genetic risk based on genomic profiles. We provide a strategy to estimate the proportion of genetic variance explained on the liability scale from estimates of AUC, disease prevalence, and heritability (or sibling recurrence risk), available as an online calculator. PMID:20195508
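
    The AUC ceiling described above can be illustrated numerically with the liability-threshold model. The Monte Carlo sketch below (illustrative parameters, not the paper's closed-form derivation) simulates a predictor that captures all of the genetic variance and shows that its AUC still falls short of 1:

```python
# Liability-threshold sketch: liability = genetic + environmental, cases are
# the top `prevalence` fraction of liability. Even a perfect genetic predictor
# (explaining all h2 of liability) has AUC < 1. Parameters are illustrative.
import math
import random

def simulate_max_auc(h2=0.5, prevalence=0.01, n=200_000, seed=1):
    rng = random.Random(seed)
    people = []
    for _ in range(n):
        g = rng.gauss(0, math.sqrt(h2))        # genetic component
        e = rng.gauss(0, math.sqrt(1 - h2))    # environmental component
        people.append((g, g + e))
    people.sort(key=lambda p: p[1], reverse=True)
    n_cases = int(n * prevalence)
    cases = [g for g, _ in people[:n_cases]]
    controls = [g for g, _ in people[n_cases:]]
    # AUC = P(random case scores higher than random control)
    wins = ties = 0
    trials = 100_000
    for _ in range(trials):
        c, k = rng.choice(cases), rng.choice(controls)
        if c > k:
            wins += 1
        elif c == k:
            ties += 1
    return (wins + 0.5 * ties) / trials

auc = simulate_max_auc()
print(round(auc, 2))  # well above 0.5, but clearly below 1.0
```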

  16. Feeling Better at This Age? Investigating Three Explanations for Self-Rated Health Improvements Among the Oldest-Old.

    PubMed

    Vogelsang, Eric M

    2017-09-08

    Although the majority of individuals in their 80s or 90s do not experience improving health, a significant portion of this age group either (a) subjectively assess their health as improving; or (b) demonstrate self-rated health improvements when comparing consecutive surveys. While there is a body of research that examines self-rated health declines in older ages, much less work has studied possible determinants of self-rated health improvements. This is important, since there is increasing evidence that oldest-old adults have unique health evaluative processes that are not yet well-understood. Using 21,155 observations from eight waves of the Asset and Health Dynamics survey (the oldest-old portion of the Health and Retirement Study), I use hierarchical linear models to test three explanations as to why the oldest-old may report or demonstrate self-rated health improvements: (a) normalized pre-existing chronic conditions, (b) positive lifestyle changes, and (c) recovery from recent prior health shocks. Health improvements calculated by comparing consecutive surveys were related to recovery from four particular serious health diagnoses (cancer, stroke, heart disease, and lung disease). Conversely, explicitly reported health improvements were associated with normalizing pre-existing conditions. Lastly, starting a regular exercise routine was related to both types of health improvements, while the cessation of negative health behaviors (i.e., drinking and smoking) was not related to either type. These results suggest that while subjective health "improvements" among the oldest-old may be a sign of successful aging, they should be interpreted critically and cautiously.

  17. Exploring the Legacies of Filmed Patient Narratives

    PubMed Central

    Robert, Glenn; Maben, Jill

    2015-01-01

    We trace the legacies of filmed patient narratives that were edited and screened to encourage engagement with a participatory quality improvement project in an acute hospital setting in England. Using Gabriel's theory of "narrative contract," we examine the initial success of the films in establishing common ground for the participatory project and the later, more varied, interpretations of the films. Over time, the films were interpreted by staff as either useful sources of learning by critical reflection, dubious (invalid or unreliable) representations of patient experience, or as "closed" items available as auditable evidence of completed quality improvement work. We find these interpretations of the films to be shaped by the effect of social distance, the differential outcomes of project work, and changing organizational agendas. We consider the wider conditions of patient narrative as a form of quality improvement knowledge with immediate potency and fragile or fluid legitimacy over time. PMID:25576480

  18. Transference interpretations in dynamic psychotherapy: do they really yield sustained effects?

    PubMed

    Høglend, Per; Bøgwald, Kjell-Petter; Amlo, Svein; Marble, Alice; Ulberg, Randi; Sjaastad, Mary Cosgrove; Sørbye, Oystein; Heyerdahl, Oscar; Johansson, Paul

    2008-06-01

    Transference interpretation has remained a core ingredient in the psychodynamic tradition, despite limited empirical evidence for its effectiveness. In this study, the authors examined long-term effects of transference interpretations. This was a randomized controlled clinical trial, dismantling design, plus follow-up evaluations 1 year and 3 years after treatment termination. One hundred outpatients seeking psychotherapy for depression, anxiety, personality disorders, and interpersonal problems were referred to the study therapists. Patients were randomly assigned to receive weekly sessions of dynamic psychotherapy for 1 year with or without transference interpretations. Five full sessions from each therapy were rated in order to document treatment fidelity. Outcome variables were the Psychodynamic Functioning Scales (clinician rated) and the Inventory of Interpersonal Problems (self-report). Rating on the Quality of Object Relations Scale (lifelong pattern) and presence of a personality disorder were postulated moderators of treatment effects. Change over time was assessed using linear mixed models. Despite an absence of differential treatment efficacy, both treatments demonstrated significant improvement during treatment and also after treatment termination. However, patients with a lifelong pattern of poor object relations profited more from 1 year of therapy with transference interpretations than from therapy without transference interpretations. This effect was sustained throughout the 4-year study period. The goal of transference interpretation is sustained improvement of the patient's relationships outside of therapy. Transference interpretation seems to be especially important for patients with long-standing, more severe interpersonal problems.

  19. Understanding Innovation Adoption in the Air Force

    DTIC Science & Technology

    2006-03-01

    Innovation Management, 19: 110-132 (2002). Gliem, Joseph A. and Rosemary Gliem. "Calculating, Interpreting, and Reporting Cronbach's Alpha...Calantone. "A Critical Look at Technological Innovation Typology and Innovativeness Terminology: A Literature Review," The Journal of Product
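
    The Gliem & Gliem reference excerpted above concerns Cronbach's alpha; its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of totals), can be sketched as follows with illustrative item scores:

```python
# Cronbach's alpha for internal consistency of a k-item scale.
# The item scores below are illustrative, not from the cited thesis.

def variance(xs):
    """Sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """`items` is a list of per-item score lists, one list per scale item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three perfectly correlated items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```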

  20. Additional Guidance for Evaluating and Calculating Degradation Kinetics in Environmental Media

    EPA Pesticide Factsheets

    EFED compiled examples in which results from PestDF (version 0.8.4), the tool most commonly used by USEPA to conduct kinetic analyses following the NAFTA guidance, required additional interpretation. Some of these examples are presented here.
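
    The simplest model in the NAFTA kinetics guidance is single first-order (SFO) decay. A minimal sketch of an SFO fit and the resulting half-life DT50 = ln 2 / k, run on synthetic data rather than EFED's examples:

```python
# Single first-order (SFO) decay: C(t) = C0 * exp(-k*t), DT50 = ln(2)/k.
# Log-linear least-squares fit on synthetic data (illustrative, not EFED's).
import math

def fit_sfo(times, concs):
    """Fit ln(C) = ln(C0) - k*t by least squares; returns (C0, k, DT50)."""
    ys = [math.log(c) for c in concs]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) / \
            sum((t - t_mean) ** 2 for t in times)
    k = -slope
    c0 = math.exp(y_mean - slope * t_mean)
    return c0, k, math.log(2) / k

# Synthetic residue data: C0 = 100, k = 0.05 per day
times = [0, 7, 14, 28, 56]
concs = [100 * math.exp(-0.05 * t) for t in times]
c0, k, dt50 = fit_sfo(times, concs)
print(round(dt50, 1))  # ~13.9 days, i.e. ln(2)/0.05
```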

  1. Map Interpretation and Terrain Analysis Course (MITAC) for Infantrymen: Illustrated Lectures

    DTIC Science & Technology

    1982-01-01

    Factors Influencing Map Design ... Interpretation of Terrain Relief and Other Topographic Features...Institute (ARI) sponsored a project to design and develop a map interpretation and terrain analysis course (MITAC) to improve the ability of Army...helicopter pilots to navigate accurately when flying at nap-of-the-earth (NOE) altitudes (McGrath, 1975; McGrath & Foster, 1975). MITAC was designed to

  2. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core and multi-core CPU and graphical processing unit (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. The large high-resolution data grids in our studies employ a pre-filtered mipmap pyramid type representation for the grid data known as the Geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on the fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing a single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is that the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply the appropriate formulation to those data points. 
We will present and compare the computational performance of the traditional prism method versus the line element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired. This algorithm can be used for all fast forward model calculations of 3D geologic interpretations for data from airborne, space and submarine gravity, and FTG instrumentation.
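
    The forward summation at the heart of such calculations can be sketched as follows. For brevity this uses a point-mass-per-cell approximation rather than the authors' line-element formulation, with an illustrative cell size and density:

```python
# Structure of a forward vertical-gravity calculation over a gridded density
# model, using a point-mass-per-cell approximation (the paper uses a
# line-element formulation; the cell below is illustrative).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_at_station(station, cells):
    """Vertical gravity (m/s^2) at `station` from (x, y, z, mass) cells.
    z is depth below the station plane, positive downward."""
    sx, sy, sz = station
    total = 0.0
    for x, y, z, mass in cells:
        dx, dy, dz = x - sx, y - sy, z - sz
        r2 = dx * dx + dy * dy + dz * dz
        total += G * mass * dz / r2 ** 1.5  # vertical component of G*m/r^2
    return total

# One 10 m cube of crustal density 2670 kg/m^3, 100 m below the station
cell = (0.0, 0.0, 100.0, 2670.0 * 10.0 ** 3)
print(gz_at_station((0.0, 0.0, 0.0), [cell]))  # small positive value
```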

  3. Launching International Collaboration for Interpretation Research

    ERIC Educational Resources Information Center

    Shaw, Sherry

    2006-01-01

    The expansion of interpretation research projects across national boundaries contributes to improved personal, professional, and intellectual outcomes for researchers and practitioners. Establishing and maintaining these collaborative teams may be especially beneficial to strengthening the research agenda of new researchers. Conducting…

  4. Configuration of the magnetic field and reconstruction of Pangaea in the Permian period.

    PubMed

    Westphal, M

    1977-05-12

    The virtual geomagnetic poles of Laurasia and Gondwanaland in the Carboniferous and Permian periods diverge significantly when these continents are reassembled according to the fit calculated by Bullard et al. Two interpretations have been offered: Briden et al. explain these divergences by a magnetic field configuration very different from that of a geocentric axial dipole; Irving (and private communication), Van der Voo and French suggest a different reconstruction. It is shown here that these two interpretations are not incompatible and that the first may help the second.

  5. Experimental design and reporting standards for improving the internal validity of pre-clinical studies in the field of pain: Consensus of the IMI-Europain consortium.

    PubMed

    Knopp, K L; Stenfors, C; Baastrup, C; Bannon, A W; Calvo, M; Caspani, O; Currie, G; Finnerup, N B; Huang, W; Kennedy, J D; Lefevre, I; Machin, I; Macleod, M; Rees, H; Rice, A S C; Rutten, K; Segerdahl, M; Serra, J; Wodarski, R; Berge, O-G; Treede, R-D

    2017-12-29

    Background and aims Pain is a subjective experience, and as such, pre-clinical models of human pain are highly simplified representations of clinical features. These models are nevertheless critical for the delivery of novel analgesics for human pain, providing pharmacodynamic measurements of activity and, where possible, on-target confirmation of that activity. It has, however, been suggested that at least 50% of all pre-clinical data, independent of discipline, cannot be replicated. Additionally, the paucity of "negative" data in the public domain indicates a publication bias, and significantly impacts the interpretation of failed attempts to replicate published findings. Evidence suggests that systematic biases in experimental design and conduct and insufficiencies in reporting play significant roles in poor reproducibility across pre-clinical studies. It then follows that recommendations on how to improve these factors are warranted. Methods Members of Europain, a pain research consortium funded by the European Innovative Medicines Initiative (IMI), developed internal recommendations on how to improve the reliability of pre-clinical studies between laboratories. This guidance is focused on two aspects: experimental design and conduct, and study reporting. Results Minimum requirements for experimental design and conduct were agreed upon across the dimensions of animal characteristics, sample size calculations, inclusion and exclusion criteria, random allocation to groups, allocation concealment, and blinded assessment of outcome. Building upon the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, reporting standards were developed for pre-clinical studies of pain. These include specific recommendations for reporting on ethical issues, experimental design and conduct, and data analysis and interpretation. 
Key principles such as sample size calculation, a priori definition of a primary efficacy measure, randomization, allocation concealment, and blinding are discussed. In addition, how stress and normal rodent physiology impact the outcome of analgesic drug studies is considered. Flow diagrams are standard requirements in all clinical trials, and flow diagrams for preclinical trials, which describe the number of animals included/excluded and the reasons for exclusion, are proposed. Creation of a trial registry for pre-clinical studies focused on drug development in order to estimate possible publication bias is discussed. Conclusions More systematic research is needed to analyze how inadequate internal validity and/or experimental bias may impact reproducibility across pre-clinical pain studies. Addressing the potential threats to internal validity and the sources of experimental biases, as well as increasing the transparency in reporting, are likely to improve preclinical research broadly by ensuring relevant progress is made in advancing the knowledge of chronic pain pathophysiology and identifying novel analgesics. Implications We are now disseminating these Europain processes for discussion in the wider pain research community. Any benefit from these guidelines will be dependent on acceptance and disciplined implementation across pre-clinical laboratories, funding agencies and journal editors, but it is anticipated that these guidelines will be a first step towards improving scientific rigor across the field of pre-clinical pain research.
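
    One of the key principles named above, sample size calculation, can be sketched with the standard normal-approximation formula for a two-group comparison (the effect size and variance below are illustrative):

```python
# Sample size per group for a two-sample comparison of means:
# n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma/delta)^2.
# Illustrative effect size and variance, not guideline-specific values.
import math

def z(p):
    """Inverse standard-normal CDF via bisection (no external libraries)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        cdf = 0.5 * (1 + math.erf(mid / math.sqrt(2)))
        if cdf < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(sigma, delta, alpha=0.05, power=0.8):
    """Animals per group to detect a mean difference `delta` with SD `sigma`."""
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

print(n_per_group(sigma=1.0, delta=1.0))  # 16 per group for a 1-SD effect
```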

  6. WQEP - a computer spreadsheet program to evaluate water quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liddle, R.G.

    1996-12-31

    A flexible spreadsheet Water Quality Evaluation Program (WQEP) has been developed for mining companies, consultants, and regulators to interpret the results of water quality sampling. To properly evaluate hydrologic data, unit conversions and chemical calculations must be done, quality control checks are needed, and a complete and up-to-date listing of water quality standards is necessary. This process is time consuming and tends not to be done for every sample. This program speeds the process by allowing the input of up to 115 chemical parameters from one sample. WQEP compares concentrations with EPA primary and secondary drinking water MCLs or MCLGs, EPA warmwater and coldwater acute and chronic aquatic life criteria, irrigation criteria, livestock criteria, EPA human health criteria, and several other categories of criteria. The spreadsheet allows the input of State or local water standards of interest. Water quality checks include: anions/cations, TDS_m/TDS_c (where m = measured and c = calculated), EC_m/EC_c, EC_m/ion sums, TDS_c/EC ratio, TDS_m/EC, EC vs. alkalinity, two hardness values, and EC vs. Σ cations. WQEP computes the dissolved transport index of 23 parameters, computes ratios of 26 species for trend analysis, calculates non-carbonate alkalinity to adjust the bicarbonate concentration, and calculates 35 interpretive formulas (pE, SAR, S.I., un-ionized ammonia, ionized sulfide HS-, pK_x values, etc.). Fingerprinting is conducted by automatic generation of Stiff diagrams and ion histograms. Mass loading calculations, mass balance calculations, conversions of concentrations, ionic strength, and the activity coefficient and chemical activity of 33 parameters are calculated. This program allows a speedy and thorough evaluation of water quality data from metal mines, coal mining, and natural surface water systems and has been tested against hand calculations.
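
    One of the quality-control checks listed above, the anion/cation balance, uses the standard charge-balance-error formula. A minimal sketch with illustrative concentrations (equivalent weights are atomic or formula weight divided by charge):

```python
# Anion/cation charge balance check, one of the QC checks WQEP performs.
# Concentrations are illustrative; equivalent weights in mg/meq.
EQ_WT = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
         "HCO3": 61.02, "SO4": 48.03, "Cl": 35.45}
CATIONS = {"Ca", "Mg", "Na", "K"}

def charge_balance_error(sample_mg_per_l):
    """Percent error: 100*(sum cations - sum anions)/(sum of both), in meq/L."""
    cat = sum(v / EQ_WT[i] for i, v in sample_mg_per_l.items() if i in CATIONS)
    an = sum(v / EQ_WT[i] for i, v in sample_mg_per_l.items() if i not in CATIONS)
    return 100.0 * (cat - an) / (cat + an)

sample = {"Ca": 40.0, "Mg": 12.0, "Na": 23.0, "K": 4.0,
          "HCO3": 122.0, "SO4": 48.0, "Cl": 35.5}
# a good analysis typically balances to within about +/- 5 percent
print(round(charge_balance_error(sample), 1))
```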

  7. Reconstructing Tsunami Flow Speed from Sedimentary Deposits

    NASA Astrophysics Data System (ADS)

    Jaffe, B. E.; Gelfenbaum, G. R.

    2014-12-01

    Paleotsunami deposits contain information about the flow that created them that can be used to reconstruct tsunami flow speed and thereby improve assessment of tsunami hazard. We applied an inverse tsunami sediment transport model to sandy deposits near Sendai Airport, Japan, that formed during the 11 March 2011 Tohoku-oki tsunami to test model performance and explore the spatial variations in tsunami flow speed. The inverse model assumes the amount of suspended sediment in the water column is in equilibrium with local flow speed and that sediment transport convergences, primarily from bedload transport, do not contribute significantly to formation of the portion of the deposit we identify as formed by sediment settling out of suspension. We interpret massive or inversely graded intervals as forming from sediment transport convergences and do not model them. Sediment falling out of suspension forms a specific type of normal grading, termed 'suspension' grading, where the entire grain size distribution shifts to finer sizes higher up in a deposit. Suspension grading is often observed in deposits of high-energy flows, including turbidity currents and tsunamis. The inverse model calculates tsunami flow speed from the thickness and bulk grain size of a suspension-graded interval. We identified 24 suspension-graded intervals from 7 trenches located near the Sendai Airport from ~250-1350 m inland from the shoreline. Flow speeds were highest ~500 m from the shoreline, landward of the forested sand dunes where the tsunami encountered lower roughness in a low-lying area as it traveled downslope. Modeled tsunami flow speeds range from 2.2 to 9.0 m/s. Tsunami flow speeds are sensitive to roughness, which is unfortunately poorly constrained. Flow speeds calculated by the inverse model were similar to those calculated from video taken from a helicopter about 1-2 km inland. 
Deposit reconstructions of suspension-graded intervals reproduced observed upward shifts in grain size distributions reasonably well. As approaches to estimating paleo-roughness improve, the flow speed and size of paleotsunamis will be better understood and the ability to assess tsunami hazard from paleotsunami deposits will improve.

  8. Accuracy assessment of vegetation community maps generated by aerial photography interpretation: perspective from the tropical savanna, Australia

    NASA Astrophysics Data System (ADS)

    Lewis, Donna L.; Phinn, Stuart

    2011-01-01

    Aerial photography interpretation is the most common mapping technique in the world. However, unlike an algorithm-based classification of satellite imagery, accuracy of aerial photography interpretation generated maps is rarely assessed. Vegetation communities covering an area of 530 km2 on Bullo River Station, Northern Territory, Australia, were mapped using an interpretation of 1:50,000 color aerial photography. Manual stereoscopic line-work was delineated at 1:10,000 and thematic maps generated at 1:25,000 and 1:100,000. Multivariate and intuitive analysis techniques were employed to identify 22 vegetation communities within the study area. The accuracy assessment was based on 50% of a field dataset collected over a 4 year period (2006 to 2009) and the remaining 50% of sites were used for map attribution. The overall accuracy and Kappa coefficient for both thematic maps was 66.67% and 0.63, respectively, calculated from standard error matrices. Our findings highlight the need for appropriate scales of mapping and accuracy assessment of aerial photography interpretation generated vegetation community maps.
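
    The overall accuracy and Kappa coefficient computed from a standard error matrix can be sketched as follows (the matrix below is illustrative, not the study's data):

```python
# Overall accuracy and Kappa coefficient from an error (confusion) matrix,
# as in a standard map accuracy assessment. The matrix is illustrative.

def accuracy_and_kappa(matrix):
    """`matrix[i][j]` = sites mapped as class i that were class j in the field."""
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    overall = diag / n
    # chance agreement from row and column marginals
    chance = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                 for i in range(len(matrix))) / n ** 2
    kappa = (overall - chance) / (1 - chance)
    return overall, kappa

matrix = [[30, 5, 2],
          [4, 25, 3],
          [1, 2, 28]]
overall, kappa = accuracy_and_kappa(matrix)
print(round(overall, 2), round(kappa, 2))  # 0.83 0.74
```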

  9. Increased Access to Professional Interpreters in the Hospital Improves Informed Consent for Patients with Limited English Proficiency.

    PubMed

    Lee, Jonathan S; Pérez-Stable, Eliseo J; Gregorich, Steven E; Crawford, Michael H; Green, Adrienne; Livaudais-Toman, Jennifer; Karliner, Leah S

    2017-08-01

    Language barriers disrupt communication and impede informed consent for patients with limited English proficiency (LEP) undergoing healthcare procedures. Effective interventions for this disparity remain unclear. Assess the impact of a bedside interpreter phone system intervention on informed consent for patients with LEP and compare outcomes to those of English speakers. Prospective, pre-post intervention implementation study using propensity analysis. Hospitalized patients undergoing invasive procedures on the cardiovascular, general surgery or orthopedic surgery floors. Installation of dual-handset interpreter phones at every bedside enabling 24-h immediate access to professional interpreters. Primary predictor: pre- vs. post-implementation group; secondary predictor: post-implementation patients with LEP vs. English speakers. Primary outcomes: three central informed consent elements, patient-reported understanding of the (1) reasons for and (2) risks of the procedure and (3) having had all questions answered. We considered consent adequately informed when all three elements were met. We enrolled 152 Chinese- and Spanish-speaking patients with LEP (84 pre- and 68 post-implementation) and 86 English speakers. Post-implementation (vs. pre-implementation) patients with LEP were more likely to meet criteria for adequately informed consent (54% vs. 29%, p = 0.001) and, after propensity score adjustment, had significantly higher odds of adequately informed consent (AOR 2.56; 95% CI, 1.15-5.72) as well as of each consent element individually. However, compared to post-implementation English speakers, post-implementation patients with LEP had significantly lower adjusted odds of adequately informed consent (AOR, 0.38; 95% CI, 0.16-0.91). 
A bedside interpreter phone system intervention to increase rapid access to professional interpreters was associated with improvements in patient-reported informed consent and should be considered by hospitals seeking to improve care for patients with LEP; however, these improvements did not eliminate the language-based disparity. Additional clinician educational interventions and more language-concordant care may be necessary for informed consent to equal that for English speakers.
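
    The paper reports propensity-adjusted odds ratios; the unadjusted 2x2 arithmetic behind such a comparison can be sketched as follows, with counts back-calculated roughly from the reported percentages (illustrative, not the study's raw data):

```python
# Unadjusted odds ratio from a 2x2 table with a Wald 95% CI. The study
# reports adjusted (propensity-score) odds ratios; counts here are
# approximate, derived from the reported percentages for illustration.
import math

def odds_ratio_ci(a, b, c, d):
    """a/b = outcome yes/no in group 1; c/d = outcome yes/no in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# ~54% of 68 post-implementation vs ~29% of 84 pre-implementation
or_, lo, hi = odds_ratio_ci(37, 31, 24, 60)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```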

  10. Pentaquarks with hidden charm as hadroquarkonia

    NASA Astrophysics Data System (ADS)

    Eides, Michael I.; Petrov, Victor Yu.; Polyakov, Maxim V.

    2018-01-01

    We consider hidden charm pentaquarks as hadroquarkonium states in a QCD-inspired approach. Pentaquarks arise naturally as bound states of quarkonia excitations and ordinary baryons. The LHCb P_c(4450) pentaquark is interpreted as a ψ′-nucleon bound state with spin-parity J^P = 3/2^-. The partial decay width Γ(P_c(4450) → J/ψ + N) ≈ 11 MeV is calculated and turns out to be in agreement with the experimental data for P_c(4450). The P_c(4450) pentaquark is predicted to be a member of one of two almost degenerate hidden-charm baryon octets with spin-parities J^P = 1/2^-, 3/2^-. The masses and decay widths of the octet pentaquarks are calculated. The widths are small and comparable with the width of the P_c(4450) pentaquark, and the masses of the octet pentaquarks satisfy the Gell-Mann-Okubo relation. Interpretation of pentaquarks as loosely bound Σ_c D̄^* and Σ_c^* D̄^* deuteronlike states is also considered. We determine the quantum numbers of these bound states and calculate their masses in the one-pion exchange scenario. The hadroquarkonium and molecular approaches to exotic hadrons are compared, and the relative advantages and drawbacks of each approach are discussed.
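
    The Gell-Mann-Okubo relation that the octet masses are said to satisfy can be checked numerically. Here it is verified for the ordinary light-baryon octet, using approximate masses in MeV (not the pentaquark masses from the paper):

```python
# Gell-Mann-Okubo relation for a baryon octet: 2*(m_N + m_Xi) = 3*m_Lambda + m_Sigma.
# Checked for the ordinary light-baryon octet with approximate masses (MeV).
m_N, m_Lambda, m_Sigma, m_Xi = 939.0, 1116.0, 1193.0, 1318.0

lhs = 2 * (m_N + m_Xi)
rhs = 3 * m_Lambda + m_Sigma
print(lhs, rhs, abs(lhs - rhs) / rhs)  # agreement at the ~1% level
```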

  11. Interpretation of mutation induction by accelerated heavy ions in bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozubek, S.; Ryznar, L.; Horneck, G.

    In this report, a quantitative interpretation of mutation induction cross sections by heavy charged particles in bacterial cells is presented. The approach is based on the calculation of the fraction of energy deposited by indirect hits in the sensitive structure. In these events the particle does not pass through the sensitive volume, but this region is hit by δ rays. Four track structure models, developed by Katz, Chatterjee et al., Kiefer and Straaten, and Kudryashov et al., respectively, were used for the calculations. With the latter two models, very good agreement of the calculations with experimental results on mutagenesis in bacteria was obtained. Depending on the linear energy transfer (LET∞) of the particles, two different modes of mutagenic action of heavy ions are distinguished: "δ-ray mutagenesis," which is related to those radiation qualities that preferentially kill the cells in direct hits (LET∞ ≥ 100 keV/μm), and "track core mutagenesis," which arises from direct hits and is observed for lighter ions or ions with high energy (LET∞ ≤ 100 keV/μm). 37 refs., 6 figs., 1 tab.

  12. An experimental and theoretical investigation into the electronically excited states of para-benzoquinone

    NASA Astrophysics Data System (ADS)

    Jones, D. B.; Limão-Vieira, P.; Mendes, M.; Jones, N. C.; Hoffmann, S. V.; da Costa, R. F.; Varella, M. T. do N.; Bettega, M. H. F.; Blanco, F.; García, G.; Ingólfsson, O.; Lima, M. A. P.; Brunger, M. J.

    2017-05-01

    We report on a combination of experimental and theoretical investigations into the structure of electronically excited para-benzoquinone (pBQ). Here synchrotron photoabsorption measurements are reported over the 4.0-10.8 eV range. The higher resolution obtained reveals previously unresolved pBQ spectral features. Time-dependent density functional theory calculations are used to interpret the spectrum and resolve discrepancies relating to the interpretation of the Rydberg progressions. Electron-impact energy loss experiments are also reported. These are combined with elastic electron scattering cross section calculations performed within the framework of the independent atom model-screening corrected additivity rule plus interference (IAM-SCAR + I) method to derive differential cross sections for electronic excitation of key spectral bands. A generalized oscillator strength analysis is also performed, with the obtained results demonstrating that a cohesive and reliable quantum chemical structure and cross section framework has been established. Within this context, we also discuss some issues associated with the development of a minimal orbital basis for the single configuration interaction strategy to be used for our high-level low-energy electron scattering calculations that will be carried out as a subsequent step in this joint experimental and theoretical investigation.

  13. Velocity Models of the Sedimentary Cover and Acoustic Basement, Central Arctic

    NASA Astrophysics Data System (ADS)

    Bezumov, D. V.; Butsenko, V.

    2017-12-01

    As part of the Russian Federation's application to the Commission on the Limits of the Continental Shelf for extension of the outer limit of the continental shelf in the Arctic Ocean, regional 2D seismic reflection and sonobuoy data were acquired in 2011, 2012 and 2014. The structure and thickness of the sedimentary cover and acoustic basement of the central Arctic Ocean can be refined using these data. "VNIIOkeangeologia" created a methodology for matching 2D velocity models of the sedimentary cover based on the vertical velocity spectrum calculated from wide-angle reflection sonobuoy data and the results of ray tracing of reflected and refracted waves. Matched 2D velocity models of the sedimentary cover in the Russian part of the Arctic Ocean were computed along several seismic profiles (see Figure). Figure comments: a) vertical velocity spectrum calculated from wide-angle reflection sonobuoy data; the RMS velocity curve was picked in accordance with the interpreted MCS section, and interval velocities within sedimentary units are shown (interval velocities from the Seiswide model in brackets); b) interpreted sonobuoy record overlain with time-distance curves calculated by ray-tracing modelling; c) final depth velocity model specified by means of the Seiswide software.

  14. Surface Ligand Promotion of Carbon Dioxide Reduction through Stabilizing Chemisorbed Reactive Intermediates.

    PubMed

    Wang, Zhijiang; Wu, Lina; Sun, Kun; Chen, Ting; Jiang, Zhaohua; Cheng, Tao; Goddard, William A

    2018-05-23

    We have explored functionalizing metal catalysts with surface ligands as an approach to facilitate the electrochemical carbon dioxide reduction reaction (CO2RR). To provide a molecular-level understanding of the mechanism by which this enhancement occurs, we combine in situ spectroscopy analysis with an interpretation based on quantum mechanics (QM) calculations. We find that a surface ligand can play a critical role in stabilizing the chemisorbed CO2, which facilitates CO2 activation and leads to a 0.3 V decrease in the overpotential for carbon monoxide (CO) formation. Moreover, the presence of the surface ligand leads to nearly exclusive CO production. At -0.6 V (versus reversible hydrogen electrode, RHE), CO is the only significant product, with a faradaic efficiency of 93% and a current density of 1.9 mA cm^-2. This improvement corresponds to a 53-fold enhancement in turnover frequency compared with Ag nanoparticles (NPs) without surface ligands.
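
    The reported faradaic efficiency follows from the standard charge balance FE = nFN/Q. The sketch below shows the arithmetic with illustrative numbers chosen near the reported current density (not the paper's raw data):

```python
# Faradaic efficiency relates moles of product to total charge passed:
# FE = n * F * N_product / Q_total, with n = 2 electrons for CO2 -> CO.
# The product amount and timing below are illustrative assumptions.
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(moles_product, n_electrons, charge_coulombs):
    return n_electrons * F * moles_product / charge_coulombs

# Suppose 1.9 mA over 1 cm^2 for 1 hour, with 33 umol of CO collected
charge = 1.9e-3 * 3600                     # total charge, ~6.84 C
fe = faradaic_efficiency(33e-6, 2, charge)
print(round(fe * 100))                     # percent, ~93
```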

  15. What's all the fuss about? facts and figures about bone marrow failure and conditions.

    PubMed

    Mukherjee, Sudipto; Sekeres, Mikkael A

    2012-12-01

    The epidemiology of bone marrow failure conditions is not well understood. Although several population-based studies conducted in the last two decades have generated a wealth of information, it is still very challenging to interpret disease incidence and prevalence, particularly due to changes in disease classification, misdiagnosis of patients, frequent underreporting and use of different referent populations to calculate rates. Despite these limitations, the available epidemiologic data have revealed significant ethnic, geographic and clinical differences in disease biology that have implications for prevention and treatment strategies. With advances made in targeted therapies facilitated by identification of molecular biomarkers and increased use of curative bone marrow transplantation approach, the natural history of these disease entities is already changing. The epidemiology of these diseases seems to be the next frontier as knowledge gained about the risk factors and pathobiologic correlates could significantly help in designing patient-specific therapies with improved outcomes.

  16. High Energy Electron Detectors on Sphinx

    NASA Astrophysics Data System (ADS)

    Thompson, J. R.; Porte, A.; Zucchini, F.; Calamy, H.; Auriel, G.; Coleman, P. L.; Bayol, F.; Lalle, B.; Krishnan, M.; Wilson, K.

    2008-11-01

    Z-pinch plasma radiation sources are used to dose test objects with K-shell (~1-4 keV) x-rays. The implosion physics can produce high energy electrons (> 50 keV), which could distort interpretation of the soft x-ray effects. We describe the design and implementation of a diagnostic suite to characterize the electron environment of Al wire and Ar gas puff z-pinches on Sphinx. The design used ITS calculations to model detector response to both soft x-rays and electrons and to help set upper bounds on the spurious electron flux. Strategies to discriminate between the known soft x-ray emission and the suspected electron flux will be discussed. H. Calamy et al., "Use of microsecond current prepulse for dramatic improvements of wire array Z-pinch implosion," Phys. Plasmas 15, 012701 (2008); J. A. Halbleib et al., "ITS: the integrated TIGER series of electron/photon transport codes--Version 3.0," IEEE Trans. on Nuclear Sci. 39, 1025 (1992).

  17. Single Top Production at Next-to-Leading Order in the Standard Model Effective Field Theory.

    PubMed

    Zhang, Cen

    2016-04-22

    Single top production processes at hadron colliders provide information on the relation between the top quark and the electroweak sector of the standard model. We compute the next-to-leading order QCD corrections to the three main production channels: t-channel, s-channel, and tW associated production, in the standard model including operators up to dimension six. The calculation can be matched to parton shower programs and can therefore be directly used in experimental analyses. The QCD corrections are found to significantly impact the extraction of the current limits on the operators, because of both the improved accuracy and the better precision of the theoretical predictions. In addition, the distributions of some of the key discriminating observables are modified in a nontrivial way, which could change the interpretation of measurements in terms of UV complete models.

  18. Computational aerodynamics requirements: The future role of the computer and the needs of the aerospace industry

    NASA Technical Reports Server (NTRS)

    Rubbert, P. E.

    1978-01-01

    The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools, such as powerful computers optimized for fluid flow problems, is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer, who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the calculated results. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.

  19. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second in a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice between using the standard error of the mean and the calculated standard deviation to compare or predict measurement results.
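
    The distinction the commentary draws can be illustrated with a short sketch (hypothetical biomarker values, not data from the paper): an SD-based interval describes the spread of individual measurements, while an SEM-based interval describes the uncertainty of the mean itself and shrinks with sample size.

```python
import math
import statistics

# Hypothetical biomarker measurements (arbitrary units); illustrative only.
data = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9, 4.4, 4.3]

n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)      # sample standard deviation
sem = sd / math.sqrt(n)          # standard error of the mean

# ~95% interval for the spread of individual measurements (SD-based):
spread_interval = (mean - 1.96 * sd, mean + 1.96 * sd)

# ~95% confidence interval for the mean itself (SEM-based):
mean_interval = (mean - 1.96 * sem, mean + 1.96 * sem)
```

    Comparing an individual's result to a normal range calls for the SD-based interval; comparing two group means calls for the SEM-based one.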

  20. The energy spectrum and the optical absorption spectrum of C60 fullerene within the Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silant’ev, A. V., E-mail: kvvant@rambler.ru

    2015-10-15

    Anticommutator Green’s functions and the energy spectrum of C60 fullerene are calculated in the approximation of static fluctuations within the Hubbard model. On the basis of this spectrum, an interpretation is proposed for the experimentally observed optical absorption bands of C60 fullerene. The parameters that characterize C60 fullerene within the Hubbard model are calculated from the optical absorption spectrum.
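
    For reference, the Hubbard model invoked above is the standard tight-binding Hamiltonian with on-site Coulomb repulsion (textbook form, not reproduced from the paper):

```latex
H = -t \sum_{\langle i,j \rangle,\sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

    Here t is the hopping integral between neighbouring carbon atoms and U the on-site Coulomb repulsion; these are the kinds of parameters the authors fit to the optical absorption spectrum.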

  1. Vibrational spectroscopy of resveratrol

    NASA Astrophysics Data System (ADS)

    Billes, Ferenc; Mohammed-Ziegler, Ildikó; Mikosch, Hans; Tyihák, Ernő

    2007-11-01

    In this article the authors deal with the experimental and theoretical interpretation of the vibrational spectra of trans-resveratrol (3,5,4'-trihydroxy- trans-stilbene) of diverse beneficial biological activity. Infrared and Raman spectra of the compound were recorded; density functional calculations were carried out resulting in the optimized geometry and several properties of the molecule. Based on the calculated force constants, a normal coordinate analysis yielded the character of the vibrational modes and the assignment of the measured spectral bands.

  2. High-resolution seismic imaging of the gas and gas hydrate system at Green Canyon 955 in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Haines, S. S.; Hart, P. E.; Collett, T. S.; Shedd, W. W.; Frye, M.

    2015-12-01

    High-resolution 2D seismic data acquired by the USGS in 2013 enable detailed characterization of the gas and gas hydrate system at lease block Green Canyon 955 (GC955) in the Gulf of Mexico, USA. Earlier studies, based on conventional industry 3D seismic data and logging-while-drilling (LWD) borehole data acquired in 2009, identified general aspects of the regional and local depositional setting along with two gas hydrate-bearing sand reservoirs and one layer containing fracture-filling gas hydrate within fine-grained sediments. These studies also highlighted a number of critical remaining questions. The 2013 high-resolution 2D data fill a significant gap in our previous understanding of the site by enabling interpretation of the complex system of faults and gas chimneys that provide conduits for gas flow and thus control the gas hydrate distribution observed in the LWD data. In addition, we have improved our understanding of the main channel/levee sand reservoir body, mapping in fine detail the levee sequences and the fault system that segments them into individual reservoirs. The 2013 data provide a rarely available high-resolution view of a levee reservoir package, with sequential levee deposits clearly imaged. Further, we can calculate the total gas hydrate resource present in the main reservoir body, refining earlier estimates. Based on the 2013 seismic data and assumptions derived from the LWD data, we estimate an in-place volume of 840 million cubic meters or 29 billion cubic feet of gas in the form of gas hydrate. Together, these interpretations provide a significantly improved understanding of the gas hydrate reservoirs and the gas migration system at GC955.
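
    The two volume figures quoted above are the same quantity in different units; the conversion is a one-liner (35.3147 cubic feet per cubic meter):

```python
# Unit check on the abstract's in-place gas estimate.
CUBIC_FEET_PER_CUBIC_METER = 35.3147

volume_m3 = 840e6                                            # 840 million m^3
volume_bcf = volume_m3 * CUBIC_FEET_PER_CUBIC_METER / 1e9    # billion cubic feet
# volume_bcf is about 29.7, consistent with the abstract's rounded 29 bcf.
```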

  3. Confined compressive strength analysis can improve PDC bit selection. [Polycrystalline Diamond Compact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabain, R.T.

    1994-05-16

    A rock strength analysis program, through intensive log analysis, can quantify rock hardness in terms of confined compressive strength to identify intervals suited for drilling with polycrystalline diamond compact (PDC) bits. Additionally, knowing the confined compressive strength helps determine the optimum PDC bit for those intervals. Computing rock strength as confined compressive strength characterizes a rock's actual downhole hardness more accurately than other methods. The information can be used to improve bit selection and to help adjust drilling parameters to reduce drilling costs. Empirical data compiled from numerous field strength analyses have provided a guide to selecting PDC drill bits. A computer analysis program has been developed to aid in PDC bit selection. The program more accurately defines rock hardness in terms of confined strength, which approximates the in situ rock hardness downhole; unconfined compressive strength, by contrast, is rock hardness at atmospheric pressure. The program uses sonic and gamma ray logs as well as numerous input data from mud logs. Within the range of lithologies for which the program is valid, rock hardness can be determined with improved accuracy. The program's output is typically graphed in a log format displaying raw data traces from well logs, computer-interpreted lithology, the calculated values of confined compressive strength, and various optional rock mechanics outputs.

  4. Kinetic Requirements for the Measurement of Mesospheric Water Vapor at 6.8 (microns) under Non-LTE Conditions

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Mlynczak, Martin G.; Lopez-Puertas, Manuel; Russell, James M., III

    1999-01-01

    We present accuracy requirements for specific kinetic parameters used to calculate the populations and vibrational temperatures of the H2O(010) and H2O(020) states in the terrestrial mesosphere. The requirements are based on rigorous simulations of the retrieval of mesospheric water vapor profiles from measurements of water vapor infrared emission made by limb scanning instruments on orbiting satellites. Major improvements in the rate constants that describe vibration-to-vibration exchange between the H2O(010) and O2(1) states are required, in addition to improved specification of the rate of quenching of O2(1) by atomic oxygen (O). It is also necessary to more accurately determine the yield of vibrationally excited O2(1) resulting from ozone photolysis. A contemporary measurement of the rate of quenching of H2O(010) by N2 and O2 is also desirable. These rates are either highly uncertain or have never before been measured at atmospheric temperatures. The suggested improvements are necessary for the interpretation of water vapor emission measurements at 6.8 microns to be made from a new spaceflight experiment in less than 2 years. The approach to retrieving water vapor under non-LTE conditions is also presented.

  5. Minimizing Barriers in Learning for On-Call Radiology Residents-End-to-End Web-Based Resident Feedback System.

    PubMed

    Choi, Hailey H; Clark, Jennifer; Jay, Ann K; Filice, Ross W

    2018-02-01

    Feedback is an essential part of medical training, where trainees are provided with information regarding their performance and further directions for improvement. In diagnostic radiology, feedback entails a detailed review of the differences between the residents' preliminary interpretation and the attendings' final interpretation of imaging studies. While the on-call experience of independently interpreting complex cases is important to resident education, the more traditional synchronous "read-out" or joint review is impossible due to multiple constraints. Without an efficient method to compare reports, grade discrepancies, convey salient teaching points, and view images, valuable lessons in image interpretation and report construction are lost. We developed a streamlined web-based system, including report comparison and image viewing, to minimize barriers in asynchronous communication between attending radiologists and on-call residents. Our system provides real-time, end-to-end delivery of case-specific and user-specific feedback in a streamlined, easy-to-view format. We assessed quality improvement subjectively through surveys and objectively through participation metrics. Our web-based feedback system improved user satisfaction for both attending and resident radiologists, and increased attending participation, particularly with regards to cases where substantive discrepancies were identified.

  6. Addressing the coming radiology crisis-the Society for Computer Applications in Radiology transforming the radiological interpretation process (TRIP) initiative.

    PubMed

    Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot

    2004-12-01

    The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.

  7. Estimation of oxygen isotope in source water of tree-ring cellulose in Indonesia using tree-ring oxygen isotope model

    NASA Astrophysics Data System (ADS)

    Hisamochi, R.; Watanabe, Y.; Kurita, N.; Sano, M.; Nakatsuka, T.; Matsuo, M.; Yamamoto, H.; Sugiyama, J.; Tsuda, T.; Tagami, T.

    2016-12-01

    Oxygen isotope composition (δ18O) of tree-ring cellulose has been used as a paleoclimate proxy because its origin is atmospheric precipitation. However, interpretation of tree-ring cellulose δ18O is not simple because, strictly speaking, the source water of tree-ring cellulose (the water taken up by the tree) is not atmospheric precipitation but soil water or ground water in the growing season. In this study, we investigate the relationship between the source water of tree-ring cellulose and precipitation in order to improve the interpretation of tree-ring cellulose δ18O as a paleoclimate proxy. We collected ten teak (Tectona grandis) plantation samples on Java Island, Indonesia. Teak is a deciduous tree and grows in the rainy season. Samples were cut into annual rings after cellulose extraction. δ18O of individual rings was measured by TCEA-IRMS at the Research Institute for Humanity and Nature. We calculated δ18O of source water by means of a tree-ring oxygen isotope model and then compared δ18O of source water with that of monthly atmospheric precipitation at Jakarta (GNIP; Global Network of Isotopes in Precipitation). Source water δ18O shows two types of significant correlation with δ18O in atmospheric precipitation. One is a positive correlation with δ18O of atmospheric precipitation in the previous rainy season. The other is a negative correlation with δ18O of atmospheric precipitation at the beginning of the growing season. The former indicates that soil water in the growing season contains rainfall from the previous rainy season and that teak mainly takes it up. The latter is difficult to interpret; it may be related to soil moisture at the beginning of the growing season.

  8. Improving the geological interpretation of magnetic and gravity satellite anomalies

    NASA Technical Reports Server (NTRS)

    Hinze, W. J.; Braile, L. W. (Principal Investigator); Vonfrese, R. R. B.

    1985-01-01

    Current limitations in the quantitative interpretation of satellite-elevation geopotential field data and magnetic anomaly data were investigated along with techniques to overcome them. A major result was the preparation of an improved scalar magnetic anomaly map of South America and adjacent marine areas directly from the original MAGSAT data. In addition, comparisons of South American and Euro-African data show a strong correlation of anomalies along the Atlantic rifted margins of the continents.

  9. ACHP | News

    Science.gov Websites

    Foundation honored for creation and implementation of new Preserve America historic-educational grants: a grant program to create and improve interpretive, educational, and visitor experiences, and to fund National Wildlife Refuge System interpretive and educational projects focusing on history.

  10. A Modern Approach to College Analytical Chemistry.

    ERIC Educational Resources Information Center

    Neman, R. L.

    1983-01-01

    Describes a course which emphasizes all facets of analytical chemistry, including sampling, preparation, interference removal, selection of methodology, measurement of a property, and calculation/interpretation of results. Includes special course features (such as cooperative agreement with an environmental protection center) and course…

  11. Evaluating Evidence for Conceptually Related Constructs Using Bivariate Correlations

    ERIC Educational Resources Information Center

    Swank, Jacqueline M.; Mullen, Patrick R.

    2017-01-01

    The article serves as a guide for researchers in developing evidence of validity using bivariate correlations, specifically construct validity. The authors outline the steps for calculating and interpreting bivariate correlations. Additionally, they provide an illustrative example and discuss the implications.
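
    The bivariate correlation the authors describe is the Pearson product-moment coefficient; the calculation step can be sketched directly from its definition (a generic illustration, not code from the article):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related scores give r = 1.0
assert abs(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-12
```

    Interpretation then rests on the sign and magnitude of r (and, for construct validity, on whether related constructs correlate as theory predicts).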

  12. Elastic strain and twist analysis of protein structural data and allostery of the transmembrane channel KcsA

    NASA Astrophysics Data System (ADS)

    Mitchell, Michael R.; Leibler, Stanislas

    2018-05-01

    The abundance of available static protein structural data makes the more effective analysis and interpretation of this data a valuable tool to supplement the experimental study of protein mechanics. Structural displacements can be difficult to analyze and interpret. Previously, we showed that strains provide a more natural and interpretable representation of protein deformations, revealing mechanical coupling between spatially distinct sites of allosteric proteins. Here, we demonstrate that other transformations of displacements yield additional insights. We calculate the divergence and curl of deformations of the transmembrane channel KcsA. Additionally, we introduce quantities analogous to bend, splay, and twist deformation energies of nematic liquid crystals. These transformations enable the decomposition of displacements into different modes of deformation, helping to characterize the type of deformation a protein undergoes. We apply these calculations to study the filter and gating regions of KcsA. We observe a continuous path of rotational deformations physically coupling these two regions, and, we propose, underlying the allosteric interaction between these regions. Bend, splay, and twist distinguish KcsA gate opening, filter opening, and filter-gate coupling, respectively. In general, physically meaningful representations of deformations (like strain, curl, bend, splay, and twist) can make testable predictions and yield insights into protein mechanics, augmenting experimental methods and more fully exploiting available structural data.

  13. [The point-digital interpretation and the choice of the dermatoglyphic patterns on human fingers for diagnostics of consanguineous relationship].

    PubMed

    Zvyagin, V N; Rakitin, V A; Fomina, E E

    The objective of the present study was the development of a point-digital model for the scaleless interpretation of dermatoglyphic papillary patterns on human fingers that would allow the main characteristics of the traits to be comprehensively described in digital terms and the frequency of their inheritance to be assessed quantitatively. A specially developed computer program, D.glyphic. 7-14, was used to mark the dermatoglyphic patterns on the fingerprints obtained from 30 familial triplets (father + mother + child). The values of all the studied traits for kinship diagnostics were found by calculating the ratios of the sums of differences between the traits in the parent-parent pairs to those in the respective parent-child pairs. Algorithms for the point marking of the traits and for reading out the digital information about them have been developed. Traditional dermatoglyphic patterns were selected, and novel ones applied, for use within the framework of the point-digital model for the diagnostics of consanguineous relationship. The present experimental study has demonstrated the high level of inheritance of the selected traits and the possibility of developing algorithms and computation techniques for the calculation of consanguineous relationship coefficients based on these traits.

  14. A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility.

    PubMed

    Moore, Jason H; Gilbert, Joshua C; Tsai, Chia-Ti; Chiang, Fu-Tien; Holden, Todd; Barney, Nate; White, Bill C

    2006-07-21

    Detecting, characterizing, and interpreting gene-gene interactions or epistasis in studies of human disease susceptibility is both a mathematical and a computational challenge. To address this problem, we have previously developed a multifactor dimensionality reduction (MDR) method for collapsing high-dimensional genetic data into a single dimension (i.e. constructive induction) thus permitting interactions to be detected in relatively small sample sizes. In this paper, we describe a comprehensive and flexible framework for detecting and interpreting gene-gene interactions that utilizes advances in information theory for selecting interesting single-nucleotide polymorphisms (SNPs), MDR for constructive induction, machine learning methods for classification, and finally graphical models for interpretation. We illustrate the usefulness of this strategy using artificial datasets simulated from several different two-locus and three-locus epistasis models. We show that the accuracy, sensitivity, specificity, and precision of a naïve Bayes classifier are significantly improved when SNPs are selected based on their information gain (i.e. class entropy removed) and reduced to a single attribute using MDR. We then apply this strategy to detecting, characterizing, and interpreting epistatic models in a genetic study (n = 500) of atrial fibrillation and show that both classification and model interpretation are significantly improved.
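
    The SNP-selection criterion above, information gain, is the class entropy removed by conditioning on a SNP's genotype. A minimal sketch from the standard definitions (not the authors' implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(genotypes, labels):
    """Class entropy removed by conditioning on a SNP's genotype."""
    n = len(labels)
    conditional = 0.0
    for g in set(genotypes):
        subset = [lab for gt, lab in zip(genotypes, labels) if gt == g]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional
```

    A SNP whose genotypes perfectly separate cases from controls removes all class entropy (gain 1 bit for a balanced binary outcome); an uninformative SNP has gain near zero.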

  15. Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia B.; Adler, Robert; Hong, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur

    2010-01-01

    A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve algorithm performance accuracy include incorporating additional triggering factors such as tectonic activity, anthropogenic impacts and soil moisture into the algorithm calculation. Despite these limitations, the methodology presented in this regional evaluation is both straightforward to calculate and easy to interpret, making results transferable between regions and allowing findings to be placed within an inter-comparison framework. The regional algorithm scenario represents an important step in advancing regional and global-scale landslide hazard assessment and forecasting.

  16. Real-time endoscopic image orientation correction system using an accelerometer and gyrosensor.

    PubMed

    Lee, Hyung-Chul; Jung, Chul-Woo; Kim, Hee Chan

    2017-01-01

    The discrepancy between spatial orientations of an endoscopic image and a physician's working environment can make it difficult to interpret endoscopic images. In this study, we developed and evaluated a device that corrects the endoscopic image orientation using an accelerometer and gyrosensor. The acceleration of gravity and angular velocity were retrieved from the accelerometer and gyrosensor attached to the handle of the endoscope. The rotational angle of the endoscope handle was calculated using a Kalman filter with transmission delay compensation. Technical evaluation of the orientation correction system was performed using a camera by comparing the optical rotational angle from the captured image with the rotational angle calculated from the sensor outputs. For the clinical utility test, fifteen anesthesiology residents performed a video endoscopic examination of an airway model with and without using the orientation correction system. The participants reported numbers written on papers placed at the left main, right main, and right upper bronchi of the airway model. The correctness and the total time it took participants to report the numbers were recorded. During the technical evaluation, errors in the calculated rotational angle were less than 5 degrees. In the clinical utility test, there was a significant time reduction when using the orientation correction system compared with not using the system (median, 52 vs. 76 seconds; P = .012). In this study, we developed a real-time endoscopic image orientation correction system, which significantly improved physician performance during a video endoscopic exam.
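
    The sensor-fusion step described above, estimating the handle's rotation by combining gyro integration with the gravity direction from the accelerometer, can be sketched with a one-dimensional Kalman filter (an illustrative stand-in; the paper's filter and delay compensation are not reproduced here, and the variable names are hypothetical):

```python
import math

def accel_angle(ax, ay):
    """Rotation angle (degrees) about the endoscope axis implied by the
    gravity components measured along two handle axes."""
    return math.degrees(math.atan2(ax, ay))

def kalman_step(angle, p, gyro_rate, accel_meas, dt, q=0.01, r=1.0):
    """One predict/update cycle of a 1-D Kalman filter:
    predict with the gyro rate, correct with the accelerometer angle."""
    # Predict: integrate angular velocity over the time step.
    angle += gyro_rate * dt
    p += q
    # Update: blend in the accelerometer measurement.
    k = p / (p + r)
    angle += k * (accel_meas - angle)
    p *= 1 - k
    return angle, p
```

    The gyro tracks fast rotations without drift correction, while the accelerometer's gravity reading anchors the absolute angle; the Kalman gain trades off between the two.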

  17. An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Bowring, J. F.; Bowring, S. A.

    2011-06-01

    High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison between other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
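
    The core of linear uncertainty propagation with correlations is the sandwich formula var(f) ≈ J Σ Jᵀ, where J holds the partial derivatives and Σ the input covariance matrix. A generic sketch with a toy ratio function (not the paper's U-Pb equations):

```python
def propagate(jacobian, covariance):
    """Variance of a scalar output given a list of partial derivatives
    (jacobian) and the input covariance matrix: J @ Sigma @ J.T."""
    n = len(jacobian)
    return sum(jacobian[i] * covariance[i][j] * jacobian[j]
               for i in range(n) for j in range(n))

# Toy example: f = x / y near x=2, y=1, with correlated input uncertainties.
x, y = 2.0, 1.0
J = [1.0 / y, -x / y**2]        # [df/dx, df/dy]
Sigma = [[0.01, 0.002],         # var(x),    cov(x, y)
         [0.002, 0.04]]         # cov(x, y), var(y)
var_f = propagate(J, Sigma)     # 0.01 - 0.004 - 0.004 + 0.16 = 0.162
```

    Dropping the off-diagonal covariance terms, the simplification the authors avoid, would give 0.17 instead of 0.162, overstating the uncertainty here.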

  18. Research on Coordinate Transformation Method of GB-SAR Image Supported by 3D Laser Scanning Technology

    NASA Astrophysics Data System (ADS)

    Wang, P.; Xing, C.

    2018-04-01

    In the image plane of GB-SAR, identification of the deformation distribution is usually carried out by manual interpretation. This method requires analysts to have adequate experience of radar imaging and target recognition; otherwise it can easily cause false recognition of the deformation target or region. Therefore, it is very meaningful to connect the two-dimensional (2D) plane coordinate system with the common three-dimensional (3D) terrain coordinate system. To improve the global accuracy and reliability of the transformation from 2D coordinates of GB-SAR images to local 3D coordinates, and to overcome the limitation of the traditional similarity transformation parameter estimation method, 3D laser scanning data are used to assist the transformation of GB-SAR image coordinates. A straight-line fitting method for calculating the horizontal angle is proposed in this paper. After projection into a consistent imaging plane, the horizontal rotation angle can be calculated by using the linear characteristics of structures in the radar image and in the 3D coordinate system. Aided by external elevation information from 3D laser scanning technology, we completed the matching of point clouds and pixels on the projection plane according to the geometric projection principle of GB-SAR imaging, realizing the transformation of GB-SAR image coordinates to local 3D coordinates. Finally, the effectiveness of the method is verified by a GB-SAR deformation monitoring experiment on the high slope of the Geheyan dam.
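
    The straight-line fitting step can be sketched as an ordinary least-squares fit to points sampled along a linear structure, with the orientation angle taken from the fitted slope (a generic illustration under that assumption, not the authors' implementation):

```python
import math

def horizontal_angle(points):
    """Fit y = a*x + b to matched points on a linear feature by least
    squares and return the line's orientation angle in degrees."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return math.degrees(math.atan(slope))
```

    Computing this angle for the same structure in the radar image and in the 3D point cloud (after projection to a common plane) yields the horizontal rotation between the two coordinate systems.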

  19. Accurate ECG diagnosis of atrial tachyarrhythmias using quantitative analysis: a prospective diagnostic and cost-effectiveness study.

    PubMed

    Krummen, David E; Patel, Mitul; Nguyen, Hong; Ho, Gordon; Kazi, Dhruv S; Clopton, Paul; Holland, Marian C; Greenberg, Scott L; Feld, Gregory K; Faddis, Mitchell N; Narayan, Sanjiv M

    2010-11-01

    Optimal atrial tachyarrhythmia management is facilitated by accurate electrocardiogram interpretation, yet typical atrial flutter (AFl) may present without sawtooth F-waves or RR regularity, and atrial fibrillation (AF) may be difficult to separate from atypical AFl or rapid focal atrial tachycardia (AT). We analyzed whether improved diagnostic accuracy using a validated analysis tool significantly impacts costs and patient care. We performed a prospective, blinded, multicenter study using a novel quantitative computerized algorithm to identify atrial tachyarrhythmia mechanism from the surface ECG in patients referred for electrophysiology study (EPS). In 122 consecutive patients (age 60 ± 12 years) referred for EPS, 91 sustained atrial tachyarrhythmias were studied. ECGs were also interpreted by 9 physicians from 3 specialties for comparison and to allow healthcare system modeling. Diagnostic accuracy was compared to the diagnosis at EPS. A Markov model was used to estimate the impact of improved arrhythmia diagnosis. We found 13% of typical AFl ECGs had neither sawtooth flutter waves nor RR regularity, and were misdiagnosed by the majority of clinicians (0/6 correctly diagnosed by consensus visual interpretation) but correctly by quantitative analysis in 83% (5/6, P = 0.03). AF diagnosis was also improved through use of the algorithm (92%) versus visual interpretation (primary care: 76%, P < 0.01). Economically, we found that these improvements in diagnostic accuracy resulted in an average cost-savings of $1,303 and 0.007 quality-adjusted-life-years per patient. Typical AFl and AF are frequently misdiagnosed using visual criteria. Quantitative analysis improves diagnostic accuracy and results in improved healthcare costs and patient outcomes. © 2010 Wiley Periodicals, Inc.

  20. Interactive Exposure With a Blood Glucose Monitor With a Novel Glucose Color Range Indicator Is Associated With Improved Glucose Range Interpretation and Awareness in Patients With Type 2 Diabetes.

    PubMed

    Grady, Mike; Warren, Graham; Levy, Brian L; Katz, Laurence B

    2015-07-01

    The ability of patients to achieve glycemic control depends in part on their ability to interpret and act on blood glucose (BG) results. This clinical study was conducted to determine if a simple on-meter color range indicator (CRI) could improve the ability of patients to categorize BG values into low, in-range, and high glycemic ranges. The clinical study was conducted in 59 subjects with type 2 diabetes (T2DM). Subjects classified 50 general, 15 before- and 15 after-meal BG values as low, in-range, or high based on their current knowledge. Subjects then interactively experienced the on-meter CRI, which showed whether alternate BG values were low, in-range, or high. After CRI interaction, subjects repeated the original scoring assessment followed by a survey exploring their awareness of glucose ranges. Following interaction with the CRI, subjects improved their ability to categorize general, before-meal, and after-meal BG results into low, in-range, and high glycemic ranges by 23.4% ± 3.0% (SEM), 14.2% ± 2.4%, and 16.1% ± 2.9%, respectively (all P < .001). Improvement was not accompanied by an increase in time spent categorizing results. There was no correlation between subject HbA1c, test frequency, or duration of diabetes and ability to correctly classify results. Subjects agreed the CRI feature helped them easily interpret glucose values and improved their awareness of glucose ranges. A short interactive session with a meter including a CRI feature improved the ability of T2DM subjects to interpret and categorize BG values into recommended ranges. © 2015 Diabetes Technology Society.

  1. eeDAP: An Evaluation Environment for Digital and Analog Pathology

    PubMed Central

    Gallas, Brandon D.; Cheng, Wei-Chung; Gavrielides, Marios A.; Ivansky, Adam; Keay, Tyler; Wunderlich, Adam; Hipp, Jason; Hewitt, Stephen M.

    2017-01-01

    Purpose The purpose of this work is to present a platform for designing and executing studies that compare pathologists interpreting histopathology of whole slide images (WSI) on a computer display to pathologists interpreting glass slides on an optical microscope. Methods Here we present eeDAP, an evaluation environment for digital and analog pathology. The key element in eeDAP is the registration of the WSI to the glass slide. Registration is accomplished through computer control of the microscope stage and a camera mounted on the microscope that acquires images of the real time microscope view. Registration allows for the evaluation of the same regions of interest (ROIs) in both domains. This can reduce or eliminate disagreements that arise from pathologists interpreting different areas and focuses the comparison on image quality. Results We reduced the pathologist interpretation area from an entire glass slide ≈(10–30 mm)² to small ROIs <(50 µm)². We also made possible the evaluation of individual cells. Conclusions We summarize eeDAP’s software and hardware and provide calculations and corresponding images of the microscope field of view and the ROIs extracted from the WSIs. These calculations help provide a sense of eeDAP’s functionality and operating principles, while the images provide a sense of the look and feel of studies that can be conducted in the digital and analog domains. The eeDAP software can be downloaded from code.google.com (project: eeDAP) as Matlab source or as a precompiled stand-alone license-free application. PMID:28845079

  2. The methods of optical physics as a means of identifying objects’ molecular structure (based on a study of dopamine and adrenaline molecules)

    NASA Astrophysics Data System (ADS)

    Elkin, M. D.; Alykova, O. M.; Smirnov, V. V.; Stefanova, G. P.

    2017-01-01

    Structural and dynamic models of dopamine and adrenaline are proposed on the basis of ab initio quantum calculations of their geometric and electronic structure. The parameters of the adiabatic potential are determined, and an interpretation of the vibrational states of the test compounds is proposed. The conformational structure of the molecules is analyzed. A method for calculating the shifts of vibrational excitation frequencies in 1,2,4-trisubstituted benzenes, based on second-order perturbation theory, is presented. The choice of method and basis set for calculating fundamental vibrational frequencies and band intensities in the IR and Raman spectra is justified. A technique for evaluating anharmonicity with cubic and quartic force constants is described. The paper presents the results of numerical experiments: geometric parameters of the molecules, such as valence bond lengths and the angles between them, the frequencies of the vibrational states, and their integrated intensities. An interpretation of the vibrations of the conformers is given. The results are in good agreement with experimental values, and the proposed frequencies can be used to identify these compounds from their vibrational spectra. The calculations were performed with the density functional method DFT/B3LYP. It is shown that this method can be used to model the geometric parameters and electronic structure of various substituted benzenes, allowing structural-dynamic models of this class of compounds to be constructed by numerical calculation.

  3. Neurolinguistic Programming: Add It To Your Tool Chest of Interpretive Techniques.

    ERIC Educational Resources Information Center

    Parratt, Smitty

    1997-01-01

    Highlights the importance of using verbal and nonverbal neurolinguistic programming to maximize the potential of interactions between interpreters and the general public and to improve long-term interactions. Discusses the power of mirroring and representational systems. Contains 29 references. (JRH)

  4. The Impact of Patient Language Proficiency and Interpreter Service Use on the Quality of Psychiatric Care: A Systematic Review

    PubMed Central

    Bauer, Amy M.; Alegría, Margarita

    2010-01-01

    Objective To determine the effects of limited English proficiency and use of interpreters on the quality of psychiatric care. Methods A systematic literature search for English-language publications was conducted in PubMed, PsycInfo, and CINAHL and by review of the reference lists of included articles and expert sources. Of 321 citations, 26 peer-reviewed articles met inclusion criteria by reporting primary data on the clinical care for psychiatric disorders among patients with limited proficiency in English or in the providers’ language. Results Little systematic research has addressed the impact of language proficiency or interpreter use on the quality of psychiatric care in contemporary US settings. Therefore, the literature to date is insufficient to inform evidence-based guidelines for improving quality of care among patients with limited English proficiency. Nonetheless, evaluation in a patient’s non-primary language can lead to incomplete or distorted mental status assessment whereas assessments conducted via untrained interpreters may contain interpreting errors. Consequences of interpreter errors include clinicians’ failure to identify disordered thought or delusional content. Use of professional interpreters may improve disclosure and attenuate some difficulties. Diagnostic agreement, collaborative treatment planning, and referral for specialty care may be compromised. Conclusions Clinicians should become aware of the types of quality problems that may occur when evaluating patients in a non-primary language or via an interpreter. Given demographic trends in the US, future research should aim to address the deficit in the evidence base to guide clinical practice and policy. PMID:20675834

  5. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes.

    PubMed

    Eva, Kevin W; Armson, Heather; Holmboe, Eric; Lockyer, Jocelyn; Loney, Elaine; Mann, Karen; Sargeant, Joan

    2012-03-01

    Self-appraisal has repeatedly been shown to be inadequate as a mechanism for performance improvement. This has placed greater emphasis on understanding the processes through which self-perception and external feedback interact to influence professional development. As feedback is inevitably interpreted through the lens of one's self-perceptions it is important to understand how learners interpret, accept, and use feedback (or not) and the factors that influence those interpretations. 134 participants from 8 health professional training/continuing competence programs were recruited to participate in focus groups. Analyses were designed to (a) elicit understandings of the processes used by learners and physicians to interpret, accept and use (or not) data to inform their perceptions of their clinical performance, and (b) further understand the factors (internal and external) believed to influence interpretation of feedback. Multiple influences appear to impact upon the interpretation and uptake of feedback. These include confidence, experience, and fear of not appearing knowledgeable. Importantly, however, each could have a paradoxical effect of both increasing and decreasing receptivity. Less prevalent but nonetheless important themes suggested mechanisms through which cognitive reasoning processes might impede growth from formative feedback. Many studies have examined the effectiveness of feedback through variable interventions focused on feedback delivery. This study suggests that it is equally important to consider feedback from the perspective of how it is received. The interplay observed between fear, confidence, and reasoning processes reinforces the notion that there is no simple recipe for the delivery of effective feedback. 
These factors should be taken into account when trying to understand (a) why self-appraisal can be flawed, (b) why appropriate external feedback is vital (yet can be ineffective), and (c) why we may need to disentangle the goals of performance improvement from the goals of improving self-assessment.

  6. Ionizing radiation calculations and comparisons with LDEF data

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Watts, J. W., Jr.

    1992-01-01

    In conjunction with the analysis of LDEF ionizing radiation dosimetry data, a calculational program is in progress to aid in data interpretation and to assess the accuracy of current radiation models for future mission applications. To estimate the ionizing radiation environment at the LDEF dosimeter locations, scoping calculations for a simplified (one dimensional) LDEF mass model were made of the primary and secondary radiations produced as a function of shielding thickness due to trapped proton, galactic proton, and atmospheric (neutron and proton cosmic ray albedo) exposures. Preliminary comparisons of predictions with LDEF induced radioactivity and dose measurements were made to test a recently developed model of trapped proton anisotropy.

  7. Density functional calculations of the Mössbauer parameters in hexagonal ferrite SrFe12O19

    NASA Astrophysics Data System (ADS)

    Ikeno, Hidekazu

    2018-03-01

    Mössbauer parameters in a magnetoplumbite-type hexagonal ferrite, SrFe12O19, are computed using the all-electron band structure calculation based on the density functional theory. The theoretical isomer shift and quadrupole splitting are consistent with experimentally obtained values. The absolute values of hyperfine splitting parameters are found to be underestimated, but the relative scale can be reproduced. The present results validate the site-dependence of Mössbauer parameters obtained by analyzing experimental spectra of hexagonal ferrites. The results also show the usefulness of theoretical calculations for increasing the reliability of interpretation of the Mössbauer spectra.

  8. Lithium cluster anions: photoelectron spectroscopy and ab initio calculations.

    PubMed

    Alexandrova, Anastassia N; Boldyrev, Alexander I; Li, Xiang; Sarkas, Harry W; Hendricks, Jay H; Arnold, Susan T; Bowen, Kit H

    2011-01-28

    Structural and energetic properties of small, deceptively simple anionic clusters of lithium, Li(n)(-), n = 3-7, were determined using a combination of anion photoelectron spectroscopy and ab initio calculations. The most stable isomers of each of these anions, the ones most likely to contribute to the photoelectron spectra, were found using the gradient embedded genetic algorithm program. Subsequently, state-of-the-art ab initio techniques, including time-dependent density functional theory, coupled cluster, and multireference configuration interaction methods, were employed to interpret the experimental spectra.

  9. NGPA disputes plague operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stremel, K.

    1984-03-01

    Conflicting interpretations of the Natural Gas Policy Act of 1978 have placed natural gas producers in a costly financial dilemma. A recent circuit court of appeals decision allows a different method for the calculation of Btu values for gas. This dry method of Btu calculation gives a lower value for natural gas and has therefore cost pipelines an estimated one billion dollars in overpayments. The court has declared that its decision is retroactive and that producers must repay the debt. Discussions from both sides are presented.

  10. Hydrazine vapor detonations

    NASA Technical Reports Server (NTRS)

    Pedley, M. D.; Bishop, C. V.; Benz, F. J.; Bennett, C. A.; Mcclenagan, R. D.

    1988-01-01

    The detonation velocity and cell widths for hydrazine decomposition were measured over a wide range of temperatures and pressures. The detonation velocity in pure hydrazine was within 5 percent of the calculated C-J velocity. The detonation cell width measurements were interpreted using the Zeldovich-Doering-von Neumann model with a detailed reaction mechanism for hydrazine decomposition. Excellent agreement with experimental data for pure hydrazine was obtained using the empirical relation that detonation cell width was equal to 29 times the kinetically calculated reaction zone length.

  11. Tolerance requirements to prevent fluid leakage in the crucible/plunger MEA experiment MPS 770030

    NASA Technical Reports Server (NTRS)

    Rathz, T. J.

    1982-01-01

    Molten Al-In leaked unexpectedly out of the crucible of a proposed MEA materials-processing-in-space experiment, which uses a spring-loaded plunger to eliminate most free surfaces of the molten metal. The critical criteria necessary to initiate flow and the rate of fluid flow into the crucible/plunger annulus are calculated. Experimental in situ X-radiographs are interpreted according to the calculations. A note on the possible effects of capillary flow, if wetting occurs between the crucible/plunger and the liquids, is included.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalugin, A. V., E-mail: Kalugin-AV@nrcki.ru; Tebin, V. V.

    The specific features of calculating the effective multiplication factor by the Monte Carlo method for weakly coupled and non-asymptotic multiplying systems are discussed. Particular examples are considered, and practical recommendations are given for detecting such systems and performing Monte Carlo calculations on them, as typically encountered in the numerical substantiation of nuclear safety for VVER fuel management problems. In particular, the choice of parameters for the batch mode and the method of normalizing the neutron batch, as well as the determination and interpretation of the eigenvalue spectrum of the integral fission matrix, are discussed.

  13. Validity of the EQ-5D-5L and reference norms for the Spanish population.

    PubMed

    Hernandez, Gimena; Garin, Olatz; Pardo, Yolanda; Vilagut, Gemma; Pont, Àngels; Suárez, Mónica; Neira, Montse; Rajmil, Luís; Gorostiza, Inigo; Ramallo-Fariña, Yolanda; Cabases, Juan; Alonso, Jordi; Ferrer, Montse

    2018-05-16

    The EuroQol 5 dimensions 5 levels (EQ-5D-5L) is the new version of EQ-5D, developed to improve its discriminatory capacity. This study aims to evaluate the construct validity of the Spanish version and provide index and dimension population-based reference norms for the new EQ-5D-5L. Data were obtained from the 2011/2012 Spanish National Health Survey, with a representative sample (n = 20,587) of non-institutionalized Spanish adults (≥ 18 years). The EQ-5D-5L index was calculated by using the Spanish value set. Construct validity was evaluated by comparing known groups with estimators obtained through regression models, adjusted by age and gender. Sampling weights were applied to restore the representativeness of the sample and to calculate the norms stratified by gender and age groups. We calculated the percentages and standard errors of dimensions, and the deciles, percentiles 5 and 95, means, and 95% confidence intervals of the health index. All the hypotheses established a priori for known groups were confirmed (P < 0.001). The EQ-5D-5L index indicated worse health in groups with lower education level (from 0.94 to 0.87), higher number of chronic conditions (0.96-0.79), probable psychiatric disorder (0.94 vs 0.80), strong limitations (0.96-0.46), higher number of days of restriction (0.93-0.64) or confinement to bed (0.92-0.49), and hospitalized in the previous 12 months (0.92 vs 0.81). The EQ-5D-5L is a valid instrument to measure perceived health in the Spanish-speaking population. The representative population-based norms provided here will help improve the interpretation of results obtained with the new EQ-5D-5L.
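
    The index calculation referred to above (profile plus value set) can be sketched generically. The decrement table below is illustrative only, invented for this sketch; it is NOT the Spanish value set, whose published tariffs must be used for any real calculation.

```python
# Illustrative sketch of how an EQ-5D-5L index is derived from a value set:
# start from full health (1.0) and subtract one decrement per dimension
# according to the reported level. The numbers below are made up.

# Dimensions: mobility (MO), self-care (SC), usual activities (UA),
# pain/discomfort (PD), anxiety/depression (AD); level 1 = no problems.
ILLUSTRATIVE_DECREMENTS = {
    "MO": [0.0, 0.05, 0.08, 0.20, 0.27],
    "SC": [0.0, 0.04, 0.07, 0.18, 0.24],
    "UA": [0.0, 0.04, 0.06, 0.16, 0.21],
    "PD": [0.0, 0.06, 0.09, 0.22, 0.30],
    "AD": [0.0, 0.05, 0.08, 0.21, 0.28],
}

def eq5d_index(profile):
    """Index for a 5-digit profile like '11111' (full health) or '21345'."""
    assert len(profile) == 5 and all(c in "12345" for c in profile)
    dims = ["MO", "SC", "UA", "PD", "AD"]
    decrement = sum(ILLUSTRATIVE_DECREMENTS[d][int(c) - 1]
                    for d, c in zip(dims, profile))
    return round(1.0 - decrement, 3)
```

    Real value sets also include interaction terms in some countries, and severe profiles can yield indices below zero (states valued as worse than dead), which is why population norms such as those in this study are needed for interpretation.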

  14. Validity of a traffic air pollutant dispersion model to assess exposure to fine particles.

    PubMed

    Kostrzewa, Aude; Reungoat, Patrice; Raherison, Chantal

    2009-08-01

    Fine particles (PM2.5) are an important component of air pollution. Epidemiological studies have shown health effects due to ambient air particles, particularly allergies in children. Since the main difficulty is to determine exposure to such pollution, traffic air pollutant (TAP) dispersion models have been developed to improve the estimation of individual exposure levels. One such model, the ExTra index, has been validated for nitrogen oxide concentrations but not for other pollutants. The purpose of this study was to assess the validity of the ExTra index for estimating PM2.5 exposure. We compared PM2.5 concentrations calculated by the ExTra index to reference measures (passive samplers situated under the covered part of the playground) in 15 schools in Bordeaux, in 2000. First, we collected the input data required by the ExTra index: background and local pollution depending on traffic, meteorology and topography. Second, the ExTra index was calculated for each school. Statistical analysis consisted of a graphic description; then, we calculated an intraclass correlation coefficient. Concentrations calculated with the ExTra index and the reference method were similar. The ExTra index underestimated exposure by 2.2 µg m⁻³ on average compared to the reference method. The intraclass correlation coefficient was 0.85 and its 95% confidence interval was [0.62; 0.95]. The results suggest that the ExTra index provides an assessment of PM2.5 exposure similar to that of the reference method. Although caution is required in interpreting these results owing to the small number of sites, the ExTra index could be a useful epidemiological tool for reconstructing individual exposure, an important challenge in epidemiology.
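
    The agreement statistic used above can be sketched as follows. This is a generic one-way random-effects ICC(1,1) for two measurements per site (model vs. reference); the study's exact ICC variant is not stated here, and all names and toy data are assumptions.

```python
# Illustrative sketch: one-way random-effects intraclass correlation
# ICC(1,1) for n sites each measured k = 2 times (e.g. model and reference).
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), from one-way ANOVA mean squares.

def icc_oneway(pairs):
    """ICC(1,1) for a list of (measurement_a, measurement_b) pairs."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    # between-site mean square
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    # within-site mean square
    msw = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
              for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Perfect agreement across sites with real between-site variability gives an ICC of 1; an ICC near 0.85, as reported here, indicates that most of the total variance is between sites rather than between methods.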

  15. Ultrafast dynamics and decoherence of quasiparticles in surface bands: Development of the formalism

    NASA Astrophysics Data System (ADS)

    Gumhalter, Branko

    2005-10-01

    We describe a formalism suitable for studying the ultrafast dynamics and nonadiabatic effects associated with propagation of a single electron injected into an empty band. Within the band the electron is coupled to vibrational or electronic excitations that can be modeled by bosons. The formalism is based on the application of cumulant expansion to calculations of diagonal single particle propagators that are used in the interpretations of time resolved measurements of the surface electronic structure. Second and fourth order cumulants which arise from linear coupling to bosonic excitations and give leading contributions to the renormalization of propagators are explicitly calculated in the real time domain and their properties analyzed. This approach enables the assessment of transient effects and energy transfer associated with nonadiabatic response of the system to promotion of electrons into unoccupied bands, as well as of higher order corrections to the lifetimes and energy shifts of the initial electronic states that in the adiabatic regime are obtained from Fermi’s golden rule approach or its improvements such as the GW approximation. In the form presented the formalism is particularly suitable for studying the non-Markovian evolution and ultrafast decoherence of electronic states encountered in electron spectroscopies of quasi-two-dimensional bands on metal surfaces whose descriptions are inaccessible to the approaches based on the adiabatic hypothesis. The fast convergence of the results obtained by this procedure is demonstrated for a simple model system relevant to surface problems. On the basis of this and some general properties of cumulants it is argued that in the majority of surface problems involving electron-boson interactions the ultrafast dynamics of quasiparticles is accurately described by the second order cumulant, which can be calculated with the effort not exceeding those encountered in the standard GW approximation calculations.
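
    The second-order cumulant renormalization described above takes the standard form below (the notation here is generic cumulant-expansion convention, not transcribed from the paper, with a linear electron-boson coupling V_q assumed):

```latex
% Second-order cumulant approximation to the quasiparticle propagator
% (generic linear electron-boson coupling; all symbols illustrative).
G(\mathbf{k},t) \simeq G_{0}(\mathbf{k},t)\, e^{C_{2}(\mathbf{k},t)},
\qquad
C_{2}(\mathbf{k},t) = -\frac{1}{\hbar^{2}} \sum_{\mathbf{q}}
\lvert V_{\mathbf{q}}\rvert^{2}
\int_{0}^{t}\! dt' \int_{0}^{t'}\! dt''\,
e^{-\frac{i}{\hbar}\left(\epsilon_{\mathbf{k}-\mathbf{q}}
+ \hbar\omega_{\mathbf{q}} - \epsilon_{\mathbf{k}}\right)(t'-t'')}
```

    In the long-time limit C₂ grows linearly in t, recovering a golden-rule-type energy shift and lifetime; its deviations from linearity at short times encode the transient, non-Markovian dynamics the formalism is designed to capture.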

  16. Spectroscopic and chemical reactivity analysis of D-Myo-Inositol using quantum chemical approach and its experimental verification

    NASA Astrophysics Data System (ADS)

    Mishra, Devendra P.; Srivastava, Anchal; Shukla, R. K.

    2017-07-01

    This paper describes the spectroscopic (¹H and ¹³C NMR, FT-IR and UV-Visible), chemical, nonlinear optical and thermodynamic properties of D-Myo-Inositol using quantum chemical techniques and their experimental verification. The structural parameters of the compound are determined from the optimized geometry by the B3LYP method with the 6-311++G(d,p) basis set. It was found that the optimized parameters thus obtained are almost in agreement with the experimental ones. A detailed interpretation of the infrared spectra of D-Myo-Inositol is also reported in the present work. After optimization, the proton and carbon NMR chemical shifts of the studied compound are calculated using GIAO and the 6-311++G(d,p) basis set. The search for organic materials with improved charge transfer properties requires precise quantum chemical calculations of space-charge density distribution, state and transition dipole moments and HOMO-LUMO states. The nature of the transitions in the observed UV-Visible spectrum of the compound has been studied by time-dependent density functional theory (TD-DFT). The global reactivity descriptors, like chemical potential, electronegativity, hardness, softness and electrophilicity index, have been calculated using DFT. The thermodynamic calculation related to the title compound was also performed at the B3LYP/6-311++G(d,p) level of theory. The standard statistical thermodynamic functions, like heat capacity at constant pressure, entropy and enthalpy change, were obtained from the theoretical harmonic frequencies of the optimized molecule. It is observed that the values of heat capacity, entropy and enthalpy increase with increase in temperature from 100 to 1000 K, which is attributed to the enhancement of molecular vibration with the increase in temperature.

  17. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students – a pilot study

    PubMed Central

    Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary

    2009-01-01

    Background Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills. However, published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of a teaching puzzle for EKG interpretation skills among medical students. Methods This was a reader-blinded crossover trial. Third-year medical students from the College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation lectures followed by a standardized exam, then two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first, followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. Results The average score after only traditional teaching was 4.07 ± 2.08, while after only the puzzle session it was 4.04 ± 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 ± 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1 compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Conclusion Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxing, and warrant further investigation on a larger scale. PMID:19144134

  18. 3D-QSAR based on quantum-chemical molecular fields: toward an improved description of halogen interactions.

    PubMed

    Güssregen, Stefan; Matter, Hans; Hessler, Gerhard; Müller, Marco; Schmidt, Friedemann; Clark, Timothy

    2012-09-24

    Current 3D-QSAR methods such as CoMFA or CoMSIA make use of classical force-field approaches for calculating molecular fields. Thus, they cannot adequately account for noncovalent interactions involving halogen atoms, such as halogen bonds or halogen-π interactions. These deficiencies in the underlying force fields result from the lack of treatment of the anisotropy of the electron density distribution of those atoms, known as the "σ-hole", although recent developments have begun to take specific interactions such as halogen bonding into account. We have now replaced classical force-field-derived molecular fields with local properties such as the local ionization energy, local electron affinity, or local polarizability, calculated using quantum-mechanical (QM) techniques that do not suffer from the above limitation for 3D-QSAR. We first investigate the characteristics of QM-based local property fields to show that they are suitable for statistical analyses after suitable pretreatment. We then analyze these property fields with partial least-squares (PLS) regression to predict biological affinities of two data sets comprising factor Xa and GABA-A/benzodiazepine receptor ligands. While the resulting models perform equally well or even slightly better in terms of consistency and predictivity than the classical CoMFA fields, the most important aspect of these augmented field types is that the chemical interpretation of the resulting QM-based property field models reveals unique SAR trends driven by electrostatic and polarizability effects, which cannot be extracted directly from CoMFA electrostatic maps. Within the factor Xa set, the interactions of chlorine and bromine atoms with a tyrosine side chain in the protease S1 pocket are correctly predicted. 
Within the GABA-A/benzodiazepine ligand data set, PLS models of high predictivity resulted for our QM-based property fields, providing novel insights into key features of the SAR for two receptor subtypes and cross-receptor selectivity of the ligands. The detailed interpretation of regression models derived using improved QM-derived property fields thus provides a significant advantage by revealing chemically meaningful correlations with biological activity and helps in understanding novel structure-activity relationship features. This will allow such knowledge to be used to design novel molecules on the basis of interactions additional to steric and hydrogen-bonding features.
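
    The PLS regression step used above can be sketched in miniature. Below is one round of the classic NIPALS algorithm for single-response PLS1 (a generic textbook form, not the authors' software), assuming already mean-centered data; all names and the toy data are illustrative, and real 3D-QSAR uses thousands of field values per molecule and several latent components.

```python
# Illustrative sketch: one NIPALS round of PLS1 regression.
# X is a list of rows (samples x field values), y a list of targets,
# both assumed mean-centered.

def pls1_one_component(X, y):
    """Fit a single PLS latent component; returns (weights w, y-loading q)."""
    n, p = len(X), len(X[0])
    # weight vector w proportional to X'y: direction of maximal covariance with y
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # latent scores t = X w
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    # y-loading q: regression coefficient of y on the scores t
    q = sum(t[i] * y[i] for i in range(n)) / tt
    return w, q

def predict(x_row, w, q):
    """Predicted (centered) response for one sample."""
    return q * sum(xj * wj for xj, wj in zip(x_row, w))
```

    Further components are extracted by deflating X and y with the fitted component and repeating; the latent-variable structure is what lets PLS cope with the many collinear field variables typical of CoMFA-style data.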

  19. Rotational Dynamics of Proteins from Spin Relaxation Times and Molecular Dynamics Simulations.

    PubMed

    Ollila, O H Samuli; Heikkinen, Harri A; Iwaï, Hideo

    2018-06-14

    Conformational fluctuations and rotational tumbling of proteins can be experimentally accessed with nuclear spin relaxation experiments. However, interpretation of molecular dynamics from the experimental data is often complicated, especially for molecules with anisotropic shape. Here, we apply classical molecular dynamics simulations to interpret the conformational fluctuations and rotational tumbling of proteins with arbitrarily anisotropic shape. The direct calculation of spin relaxation times from simulation data did not reproduce the experimental data. This was successfully corrected by scaling the overall rotational diffusion coefficients around the protein inertia axes with a constant factor. The achieved good agreement with experiments allowed the interpretation of the internal and overall dynamics of proteins with significantly anisotropic shape. The overall rotational diffusion was found to be Brownian, having only a short subdiffusive region below 0.12 ns. The presented methodology can be applied to interpret rotational dynamics and conformation fluctuations of proteins with arbitrary anisotropic shape. However, a water model with more realistic dynamical properties is probably required for intrinsically disordered proteins.

  20. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    PubMed

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
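    The multivariable linear model discussed above reduces, in the simplest case, to an ordinary least-squares fit. The sketch below fits a two-predictor model on simulated data; the outcome, predictors, and coefficients are invented for illustration. The fitted coefficient on age estimates the change in outcome per unit of age holding BMI fixed, which is exactly the adjusted interpretation the article stresses.

```python
import numpy as np

# Invented example: model systolic blood pressure from age and BMI.
rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 70, n)
bmi = rng.uniform(18, 35, n)
sbp = 90.0 + 0.5 * age + 1.2 * bmi + rng.normal(0.0, 5.0, n)  # true model + noise

X = np.column_stack([np.ones(n), age, bmi])     # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)  # ordinary least squares
resid = sbp - X @ coef
r2 = 1.0 - resid @ resid / np.sum((sbp - sbp.mean()) ** 2)   # variance explained
```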

  1. [Lymphoscintigrams with anatomical landmarks obtained with vector graphics].

    PubMed

    Rubini, Giuseppe; Antonica, Filippo; Renna, Maria Antonia; Ferrari, Cristina; Iuele, Francesca; Stabile Ianora, Antonio Amato; Losco, Matteo; Niccoli Asabella, Artor

    2012-11-01

    Nuclear medicine images are difficult to interpret because they do not include anatomical details. The aim of this study was to obtain lymphoscintigrams with anatomical landmarks that could be easily interpreted by General Physicians. Traditional lymphoscintigrams were processed with Adobe© Photoshop® CS6 and converted into vector images created by Illustrator®. The combination with a silhouette vector improved image interpretation, without resulting in longer radiation exposure or acquisition times.

  2. A Classification of Remote Sensing Image Based on Improved Compound Kernels of SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    Support vector machines (SVMs), developed from statistical learning theory, achieve high accuracy in remote sensing (RS) classification even with small numbers of training samples, which makes them well suited to this task. The traditional RS classification method combines visual interpretation with computer classification; SVM-based classification improves accuracy while saving much of the labor and time spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The method presented here uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images; moreover, the compound kernel improves the generalization and learning ability of the classifier.

  3. The Role of Self-reports and Behavioral Measures of Interpretation Biases in Children with Varying Levels of Anxiety.

    PubMed

    Klein, Anke M; Flokstra, Emmelie; van Niekerk, Rianne; Klein, Steven; Rapee, Ronald M; Hudson, Jennifer L; Bögels, Susan M; Becker, Eni S; Rinck, Mike

    2018-04-21

    We investigated the role of self-reports and behavioral measures of interpretation biases and their content-specificity in children with varying levels of spider fear and/or social anxiety. In total, 141 selected children from a community sample completed an interpretation bias task with scenarios that were related to either spider threat or social threat. Specific interpretation biases were found; only spider-related interpretation bias and self-reported spider fear predicted unique variance in avoidance behavior on the Behavior Avoidance Task for spiders. Likewise, only social-threat related interpretation bias and self-reported social anxiety predicted anxiety during the Social Speech Task. These findings support the hypothesis that fearful children display cognitive biases that are specific to particular fear-relevant stimuli. Clinically, this insight might be used to improve treatments for anxious children by targeting content-specific interpretation biases related to individual disorders.

  4. Insights on energy selective contacts for thermal energy harvesting using double resonant tunneling contacts and numerical modeling

    NASA Astrophysics Data System (ADS)

    Julian, A.; Jehl, Z.; Miyashita, N.; Okada, Y.; Guillemoles, J.-F.

    2016-12-01

    Energy selective electrical contacts have been proposed as a way to approach ultimate efficiencies both for thermoelectric and photovoltaic devices as they allow a reduction of the entropy production during the energy conversion process. A self-consistent numerical model based on the transfer matrix approach in the effective mass and envelope function approximation has been developed to calculate the electronic properties of double resonant tunneling barriers used as energy selective contacts in hot carrier solar cells. It is found that the application of an external electric bias significantly degrades the electronic transmission of the structure, and thus the tunneling current in the current-voltage characteristic. This is due to a symmetry breaking which can be offset using finely tuned asymmetric double resonant tunneling barriers, leading to a full recovery of the tunneling current in our model. Moreover, we model the heterostructure using an electron temperature in the emitter higher than that of the lattice, providing insights on the interpretation of experimental devices functioning in hot carrier conditions, especially regarding the previously reported shift of the resonance peak (negative differential resistance), which we interpret as related to a shift in the hot electron distribution while the maximum remains at the conduction band edge of the emitter. Finally, experimental results are presented using an asymmetric structure, showing significantly improved resonant properties at room temperature with very sharp negative differential resistance.
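    The transfer-matrix idea can be sketched for a flat-band (zero-bias) double barrier with plane-wave matching at each interface. This omits the paper's self-consistent effective-mass/envelope-function treatment and bias handling, and the barrier parameters below are arbitrary dimensionless values (ħ = m = 1).

```python
import numpy as np

def transmission(E, V, widths):
    """Transfer-matrix transmission through piecewise-constant potentials.

    V lists the potential of every region (first/last entries are the leads);
    widths lists the widths of the inner regions. Units: hbar = m = 1.
    """
    k = np.sqrt(2.0 * (E - np.asarray(V, dtype=complex)))  # region wavevectors
    xs = np.concatenate([[0.0], np.cumsum(widths)])        # interface positions

    def basis(kj, x):   # plane-wave value/derivative matrix at position x
        e = np.exp(1j * kj * x)
        return np.array([[e, 1.0 / e], [1j * kj * e, -1j * kj / e]])

    M = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):   # match psi and psi' across each interface
        M = np.linalg.solve(basis(k[j + 1], x), basis(k[j], x)) @ M
    # incident wave from the left only: solve for the transmitted amplitude
    t = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]
    return float(k[-1].real / k[0].real * abs(t) ** 2)

# Symmetric double barrier (arbitrary dimensionless parameters, zero bias)
V = [0.0, 1.0, 0.0, 1.0, 0.0]        # lead, barrier, well, barrier, lead
widths = [1.0, 3.0, 1.0]
Es = np.linspace(0.05, 0.95, 2000)
Ts = np.array([transmission(E, V, widths) for E in Es])
```

Scanning the energy reveals sharp transmission resonances at the quasi-bound levels of the well; in the paper's setting an applied bias breaks this symmetry and suppresses the peaks unless the barriers are made asymmetric to compensate.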

  5. Applying of the Artificial Neural Networks (ANN) to Identify and Characterize Sweet Spots in Shale Gas Formations

    NASA Astrophysics Data System (ADS)

    Puskarczyk, Edyta

    2018-03-01

    The main goal of the study was to enhance and improve information about the Ordovician and Silurian gas-saturated shale formations. The author focused on, firstly, identification of the shale gas formations, especially the sweet-spot horizons; secondly, classification; and thirdly, accurate characterization of divisional intervals. The data set comprised standard well logs from the selected well. Shale formations are represented mainly by claystones, siltstones, and mudstones. The formations are also partially rich in organic matter. During the calculations, information about lithology or stratigraphy was not taken into account. In the analysis, a self-organizing neural network (the Kohonen algorithm, an ANN) was used for sweet-spot identification. Different networks and different software were tested, and the best network was used for application and interpretation. As a result of the Kohonen networks, groups corresponding to the gas-bearing intervals were found. The analysis showed differentiation between gas-bearing formations and surrounding beds. It is also shown that internal diversification within sweet spots is present. The Kohonen algorithm was also used for geological interpretation of well log data and electrofacies prediction. Reliable classification into groups shows that the Ja Mb and Sa Fm, which are usually treated as potential sweet spots, only partially have good reservoir conditions. It is concluded that the ANN appears to be a useful and quick tool for preliminary classification of members and sweet-spot identification.
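    The self-organizing map underlying the Kohonen approach is compact enough to sketch directly. The toy below trains a 1-D map on synthetic three-channel "log responses" split into a background cluster and a gas-bearing "sweet spot" cluster; all values, node counts, and decay schedules are hypothetical, not the study's configuration.

```python
import numpy as np

def train_som(X, n_nodes=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Kohonen self-organizing map (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_nodes, X.shape[1]))     # node weight vectors
    grid = np.arange(n_nodes)
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs                # decaying schedules
        lr, sigma = lr0 * frac, sigma0 * frac + 0.5
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
            h = np.exp(-(grid - bmu) ** 2 / (2.0 * sigma ** 2))
            W += lr * h[:, None] * (x - W)         # pull BMU neighborhood toward x
    return W

# Synthetic stand-in for well-log responses (values hypothetical)
rng = np.random.default_rng(1)
background = rng.normal(0.0, 0.3, size=(100, 3))
sweet_spot = rng.normal(2.0, 0.3, size=(40, 3))
X = np.vstack([background, sweet_spot])
W = train_som(X)
bmu_bg = int(np.argmin(((W - background.mean(0)) ** 2).sum(1)))
bmu_ss = int(np.argmin(((W - sweet_spot.mean(0)) ** 2).sum(1)))
```

After training, samples mapping to different nodes correspond to different electrofacies groups, mirroring the study's unsupervised separation of gas-bearing intervals from surrounding beds.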

  6. Mechanical and optical response of [100] lithium fluoride to multi-megabar dynamic pressures

    DOE PAGES

    Davis, Jean -Paul; Knudson, Marcus D.; Shulenburger, Luke; ...

    2016-10-26

    An understanding of the mechanical and optical properties of lithium fluoride (LiF) is essential to its use as a transparent tamper and window for dynamic materials experiments. In order to improve models for this material, we applied iterative Lagrangian analysis to ten independent sets of data from magnetically driven planar shockless compression experiments on single crystal [100] LiF to pressures as high as 350 GPa. We found that the compression response disagreed with a prevalent tabular equation of state for LiF that is commonly used to interpret shockless compression experiments. We also present complementary data from ab initio calculations performed using the diffusion quantum Monte Carlo method. The agreement between these two data sets lends confidence to our interpretation. In order to aid in future experimental analysis, we have modified the tabular equation of state to match the new data. We have also extended knowledge of the optical properties of LiF via shock-compression and shockless compression experiments, refining the transmissibility limit, measuring the refractive index to ~300 GPa, and confirming the nonlinear dependence of the refractive index on density. Lastly, we present a new model for the refractive index of LiF that includes temperature dependence and describe a procedure for correcting apparent velocity to true velocity for dynamic compression experiments.

  7. Performance evaluation of the multiple root node approach to the Rete pattern matcher for production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Gaudiot, J.-L.

    1991-12-31

    Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple root node-based production system interpreter is presented to investigate its relative algorithmic behavior over the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential machine (Sun 4/490) using both interpreters, and various experimental results are presented. Their investigation indicates that the multiple root node-based production system interpreter gives up to a 6-fold improvement over the Lisp implementation of the Rete-based Ops5 for the match step.

  8. Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    2011-01-01

    Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. 
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. 
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
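The AND/OR propagation over independent basic events described above is straightforward to check by hand. The sketch below builds a tiny two-gate tree with hypothetical event names and probabilities (not MSET's actual tree or questionnaire-derived values) and estimates a risk-reduction effect by setting one event to perfect performance:

```python
# Two-gate fault tree over independent basic events (names and probabilities
# are hypothetical, not MSET's actual tree).
def p_and(*ps):                     # AND-gate: all inputs must fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):                      # OR-gate: any one input failing fails the gate
    none_fail = 1.0
    for p in ps:
        none_fail *= 1.0 - p
    return 1.0 - none_fail

def top_event(p):
    records = p_or(p["db_entry"], p["reconcile"])      # record-keeping branch
    inventory = p_and(p["measure"], p["seal_check"])   # both checks must fail
    return p_or(records, inventory)

base = {"db_entry": 0.10, "reconcile": 0.05, "measure": 0.20, "seal_check": 0.30}
baseline = top_event(base)
# risk-reduction style check: make one basic event perfect and re-evaluate
improved = top_event({**base, "db_entry": 0.0})
risk_reduction = baseline - improved
```

This mirrors the manual sanity check the text recommends: the code's baseline and improved values can be verified by hand with the same Boolean-algebra rules the fault tree codes apply.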

  9. Figure facts: encouraging undergraduates to take a data-centered approach to reading primary literature.

    PubMed

    Round, Jennifer E; Campbell, A Malcolm

    2013-01-01

    The ability to interpret experimental data is essential to understanding and participating in the process of scientific discovery. Reading primary research articles can be a frustrating experience for undergraduate biology students because they have very little experience interpreting data. To enhance their data interpretation skills, students used a template called "Figure Facts" to assist them with primary literature-based reading assignments in an advanced cellular neuroscience course. The Figure Facts template encourages students to adopt a data-centric approach, rather than a text-based approach, to understand research articles. Specifically, Figure Facts requires students to focus on the experimental data presented in each figure and identify specific conclusions that may be drawn from those results. Students who used Figure Facts for one semester increased the amount of time they spent examining figures in a primary research article, and regular exposure to primary literature was associated with improved student performance on a data interpretation skills test. Students reported decreased frustration associated with interpreting data figures, and their opinions of the Figure Facts template were overwhelmingly positive. In this paper, we present Figure Facts for others to adopt and adapt, with reflection on its implementation and effectiveness in improving undergraduate science education.

  10. Hablamos Juntos (Together We Speak): Interpreters, Provider Communication, and Satisfaction with Care

    PubMed Central

    Morales, Leo S.

    2010-01-01

    BACKGROUND The Hablamos Juntos—Together We Speak (HJ)—national demonstration project targeted the improvement of language access for Spanish-speaking Latinos in areas with rapidly growing Latino populations. The objective of HJ was to improve doctor-patient communication by increasing access to and quality of interpreter services for Spanish-speaking patients. OBJECTIVE To investigate how access to interpreters for adult Spanish-speaking Latinos is associated with ratings of doctor/office staff communication and satisfaction with care. DESIGN Cross-sectional cohort study. PATIENTS A total of 1,590 Spanish-speaking Latino adults from eight sites across the United States who participated in the outpatient HJ evaluation. MEASUREMENTS We analyzed two multi-item measures of doctor communication (4 items) and office staff helpfulness (2 items), and one global item of satisfaction with care by interpreter use. We performed regression analyses to control for patient sociodemographic characteristics, survey year, and clustering at the site of care. RESULTS Ninety-five percent of participants were born outside the US, 81% were females, and survey response rates ranged from 45% to 85% across sites. In this cohort of Spanish-speaking patients, those who needed and always used interpreters reported better experiences with care than their counterparts who needed but had interpreters unavailable. Patients who always used an interpreter had better adjusted ratings of doctor communication [effect size (ES = 0.51)], office staff helpfulness (ES = 0.37), and satisfaction with care (ES = 0.37) than patients who needed but did not always use an interpreter. Patients who needed and always used interpreters also reported better experiences with care in all three domains measured [doctor communication (ES = 0.30), office staff helpfulness (ES = 0.21), and satisfaction with care (ES = 0.23)] than patients who did not need interpreters. 
CONCLUSIONS Among adult Spanish-speaking Latinos, interpreter use is independently associated with higher satisfaction with doctor communication, office staff helpfulness, and ambulatory care. Increased attention to the need for effective interpreter services is warranted in areas with rapidly growing Spanish-speaking populations. PMID:20703951

  11. Software user's guide for determining the Pennsylvania scour critical indicator code and streambed scour assessment rating for roadway bridges

    USGS Publications Warehouse

    Henneberg, M.F.; Strause, J.L.

    2002-01-01

    This report presents the instructions required to use the Scour Critical Bridge Indicator (SCBI) Code and Scour Assessment Rating (SAR) calculator developed by the Pennsylvania Department of Transportation (PennDOT) and the U.S. Geological Survey to identify Pennsylvania bridges with excessive scour conditions or a high potential for scour. Use of the calculator will enable PennDOT bridge personnel to quickly calculate these scour indices if site conditions change, new bridges are constructed, or new information needs to be included. Both indices are calculated for a bridge simultaneously because they must be used together to be interpreted accurately. The SCBI Code and SAR calculator program is run by a World Wide Web browser from a remote computer. The user can 1) add additional scenarios for bridges in the SCBI Code and SAR calculator database or 2) enter data for new bridges and run the program to calculate the SCBI Code and calculate the SAR. The calculator program allows the user to print the results and to save multiple scenarios for a bridge.

  12. A computer-human interaction model to improve the diagnostic accuracy and clinical decision-making during 12-lead electrocardiogram interpretation.

    PubMed

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Breen, Cathal; Guldenring, Daniel; Gaffney, Robert; Gallagher, Anthony G; Peace, Aaron J; Henn, Pat

    2016-12-01

    The 12-lead Electrocardiogram (ECG) presents a plethora of information and demands extensive knowledge and a high cognitive workload to interpret. Whilst the ECG is an important clinical tool, it is frequently incorrectly interpreted. Even expert clinicians are known to impulsively provide a diagnosis based on their first impression and often miss co-abnormalities. Given it is widely reported that there is a lack of competency in ECG interpretation, it is imperative to optimise the interpretation process. Predominantly, the ECG interpretation process remains a paper-based approach and, whilst computer algorithms are used to assist interpreters by providing printed computerised diagnoses, there is a lack of interactive human-computer interfaces to guide and assist the interpreter. An interactive computing system was developed to guide the decision making process of a clinician when interpreting the ECG. The system decomposes the interpretation process into a series of interactive sub-tasks and encourages the clinician to systematically interpret the ECG. We have named this model 'Interactive Progressive based Interpretation' (IPI) as the user cannot 'progress' unless they complete each sub-task. Using this model, the ECG is segmented into five parts and presented over five user interfaces (1: Rhythm interpretation, 2: Interpretation of the P-wave morphology, 3: Limb lead interpretation, 4: QRS morphology interpretation with chest lead and rhythm strip presentation and 5: Final review of 12-lead ECG). The IPI model was implemented using emerging web technologies (i.e. HTML5, CSS3, AJAX, PHP and MySQL). It was hypothesised that this system would reduce the number of interpretation errors and increase diagnostic accuracy in ECG interpreters. To test this, we compared the diagnostic accuracy of clinicians when they used the standard approach (control cohort) with clinicians who interpreted the same ECGs using the IPI approach (IPI cohort). 
For the control cohort, the (mean; standard deviation; confidence interval) of the ECG interpretation accuracy was (45.45%; SD=18.1%; CI=42.07, 48.83). The mean ECG interpretation accuracy rate for the IPI cohort was 58.85% (SD=42.4%; CI=49.12, 68.58), which indicates a positive mean difference of 13.4% (CI=4.45, 22.35). An N-1 Chi-square test of independence indicated a 92% chance that the IPI cohort will have a higher accuracy rate. Interpreter self-rated confidence also increased between cohorts from a mean of 4.9/10 in the control cohort to 6.8/10 in the IPI cohort (p=0.06). Whilst the IPI cohort had greater diagnostic accuracy, the duration of ECG interpretation was six times longer when compared to the control cohort. We have developed a system that segments and presents the ECG across five graphical user interfaces. Results indicate that this approach improves diagnostic accuracy but at the expense of time, which is a valuable resource in medical practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Mineral texture based seismic properties of meta-sedimentary and meta-igneous rocks in the orogenic wedge of the Central Scandinavian Caledonides

    NASA Astrophysics Data System (ADS)

    Almqvist, B. S. G.; Czaplinska, D.; Piazolo, S.

    2015-12-01

    Progress in seismic methods offers the possibility to visualize in ever greater detail the structure and composition of middle to lower continental crust. Ideally, the seismic parameters, including compressional (Vp) and shear (Vs) wave velocities, anisotropy and Vp/Vs-ratio, allow the inference of detailed and quantitative information on the deformation conditions, chemical composition, temperature and the amount and geometry of fluids and melts in the crust. However, such inferences regarding the crust should be calibrated with known mineral and rock physical properties. Seismic properties calculated from the crystallographic preferred orientation (CPO) and laboratory measurements on representative core material allow us to quantify the interpretations from seismic data. The challenge of such calibrations lies in the non-unique interpretation of seismic data. A large catalogue of physical rock properties is therefore useful, with as many constraining geophysical parameters as possible (including anisotropy and Vp/Vs ratio). We present new CPO data and modelled seismic properties for amphibolite and greenschist grade rocks representing the orogenic wedge in the Central Scandinavian Caledonides. Samples were collected from outcrops in the field and from a 2.5 km long drill core, which penetrated an amphibolite-grade allochthonous unit composed of meta-sedimentary and meta-igneous rocks, as well as mica and chlorite-rich mylonites. The textural data were acquired using large area electron backscatter diffraction (EBSD) maps, and the chemical composition of minerals was obtained by energy dispersive x-ray spectroscopy (EDS). Based on the texture data, we compare and evaluate some of the existing methods to calculate texture-based seismic properties of rocks. The suite of samples consists of weakly anisotropic rocks such as felsic gneiss and calc-silicates, and more anisotropic amphibolite, metagabbro, and mica-schist. 
The newly acquired dataset provides a range of seismic properties that improves compositional and structural characterization of deformed middle and lower crust.
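Texture-based seismic properties ultimately come from solving the Christoffel equation for the rock's effective stiffness tensor. The sketch below does this for a 6×6 Voigt stiffness matrix; as a self-check it uses an isotropic tensor (hypothetical λ, μ, and density), where Vp = √((λ+2μ)/ρ) and Vs = √(μ/ρ), but a CPO-averaged anisotropic matrix plugs in the same way.

```python
import numpy as np

VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3, (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

def christoffel_velocities(C6, rho, n):
    """Phase velocities (km/s) along direction n for a 6x6 Voigt stiffness
    matrix C6 in GPa and density rho in kg/m^3."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    G = np.zeros((3, 3))                       # Christoffel tensor C_ijkl n_j n_l
    for i in range(3):
        for k in range(3):
            G[i, k] = sum(C6[VOIGT[i, j], VOIGT[k, l]] * n[j] * n[l]
                          for j in range(3) for l in range(3))
    rho_v2 = np.linalg.eigvalsh(G)             # eigenvalues are rho * v^2 (in GPa)
    return np.sqrt(rho_v2 * 1e9 / rho) / 1e3   # ascending: [qS1, qS2, qP]

# Isotropic check (hypothetical lambda, mu in GPa; density in kg/m^3)
lam, mu, rho = 60.0, 30.0, 3000.0
C6 = np.zeros((6, 6))
C6[:3, :3] = lam
for i in range(3):
    C6[i, i] = lam + 2.0 * mu
    C6[i + 3, i + 3] = mu
v = christoffel_velocities(C6, rho, [1.0, 0.0, 0.0])
```

Sweeping n over a hemisphere of directions gives the anisotropy and direction-dependent Vp/Vs ratios that such catalogues of rock properties tabulate.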

  14. Phase transitions and dynamics of bulk and interfacial water.

    PubMed

    Franzese, G; Hernando-Martínez, A; Kumar, P; Mazza, M G; Stokely, K; Strekalova, E G; de los Santos, F; Stanley, H E

    2010-07-21

    New experiments on water at the surface of proteins at very low temperature display intriguing dynamic behaviors. The extreme conditions of these experiments make it difficult to explore the wide range of thermodynamic state points needed to offer a suitable interpretation. Detailed simulations suffer from the same problem, where equilibration times at low temperature become extremely long. We show how Monte Carlo simulations and mean field calculations using a tractable model of water help interpret the experimental results. Here we summarize the results for bulk water and investigate the thermodynamic and dynamic properties of supercooled water at an interface.
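    The Monte Carlo machinery referred to above rests on the Metropolis acceptance rule. The sketch below applies it to a deliberately trivial harmonic energy function rather than the authors' water model; equipartition (⟨x²⟩ = T in these units) gives an easy correctness check.

```python
import numpy as np

def metropolis(energy, T, steps, step_size=1.0, seed=0):
    """Generic 1-D Metropolis sampler for the Boltzmann weight exp(-E/T)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = np.empty(steps)
    for i in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        dE = energy(x_new) - energy(x)
        if dE <= 0 or rng.random() < np.exp(-dE / T):    # acceptance rule
            x = x_new
        samples[i] = x                                   # record even on rejection
    return samples

# Trivial test system (not the authors' water model): E(x) = x^2 / 2,
# for which equipartition gives <x^2> = T.
T = 0.5
s = metropolis(lambda x: 0.5 * x * x, T, steps=200_000)
var = s[50_000:].var()        # discard burn-in before averaging
```

The same accept/reject loop, applied to a many-body energy function, is what makes equilibration at low temperature tractable where brute-force molecular dynamics is not.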

  15. The continuous UV flux of Alpha Lyrae - Non-LTE results

    NASA Technical Reports Server (NTRS)

    Snijders, M. A. J.

    1977-01-01

    Non-LTE calculations for the ultraviolet C I and Si I continuous opacity show that LTE results overestimate the importance of these sources of opacity and underestimate the emergent flux in Alpha Lyr. The largest errors occur between 1100 and 1160 A, where the predicted flux in non-LTE is as much as 50 times larger than in LTE, in reasonable accord with Copernicus observations. The discrepancy between LTE models and observations has been interpreted to result from the existence of a chromosphere. Until a self-consistent non-LTE model atmosphere becomes available, such an interpretation is premature.

  16. Can emergency department triage nurses appropriately utilize the Ottawa Knee Rules to order radiographs?-An implementation trial.

    PubMed

    Kec, Robert M; Richman, Peter B; Szucs, Paul A; Mandell, Mark; Eskin, Barnet

    2003-02-01

    To determine whether triage nurses can successfully interpret the Ottawa Knee Rule (OKR) and order knee radiographs according to the OKR. This was a prospective implementation trial of a clinical decision rule, set in a suburban, community emergency department (ED), evaluating a convenience sample of ED patients aged > 17 years with acute knee injuries. Patients were excluded for altered mental status, distracting injuries, and knee lacerations. Triage nurses and attending emergency physicians (EPs) were trained in appropriate use of the OKR. The triage nurses evaluated eligible patients and radiographs were ordered according to their interpretation of the OKR. EPs who were initially blinded to the triage assessments also evaluated the patients. EPs could add an x-ray order if, according to their assessment of the OKR, one was indicated and a radiograph had not been ordered by the nurse. Nurses and EPs recorded their blinded assessments on standardized data collection instruments. Kappa values were calculated to assess interobserver agreement (IOA) between nurses and EPs; sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) were calculated as appropriate. One hundred three patients were enrolled; 53% were female; 10 fractures were identified (9.7%). The IOAs between the nurses and EPs for each of the criteria were moderate to almost perfect: age-0.94; fibular head tenderness-0.4; isolated patellar tenderness-0.68; inability to bend knee to 90 degrees-0.73; inability to bear weight-0.76. The IOA was moderate (0.52) for the overall interpretation of the OKR by nurses and EPs. Sensitivity of nurse interpretation of the OKR for fracture was 70%, specificity 33%, NPV 91%, PPV 10%. Sensitivity of EP interpretation of the OKR for fracture was 100%, specificity 25%, NPV 100%, PPV 13%. Triage nurses showed fair to good ability to appropriately apply the OKR to pre-order knee radiographs.
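    The reported accuracy measures follow from a 2×2 contingency table, and interobserver agreement from Cohen's kappa. The sketch below uses hypothetical counts chosen only to roughly echo the nurses' reported results (10 fractures in 103 patients; 70% sensitivity, 33% specificity); the study's actual tables are not given here.

```python
# 2x2 diagnostic metrics and Cohen's kappa (counts are hypothetical).
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # fractures correctly flagged for x-ray
        "specificity": tn / (tn + fp),   # non-fractures correctly not flagged
        "ppv": tp / (tp + fp),           # flagged patients who had a fracture
        "npv": tn / (tn + fn),           # unflagged patients without a fracture
    }

def cohens_kappa(a, b):
    """Chance-corrected agreement between two observers' 0/1 ratings."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                    # agreement by chance
    return (po - pe) / (1 - pe)

nurse = diagnostic_metrics(tp=7, fp=62, fn=3, tn=31)
```

The high NPV alongside low PPV is typical of a rule designed to rule out fracture safely rather than to confirm it, which is exactly how the Ottawa Knee Rule is intended to be used.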

  17. Presentation approaches for enhancing interpretability of patient-reported outcomes (PROs) in meta-analysis: a protocol for a systematic survey of Cochrane reviews.

    PubMed

    Devji, Tahira; Johnston, Bradley C; Patrick, Donald L; Bhandari, Mohit; Thabane, Lehana; Guyatt, Gordon H

    2017-09-27

    Meta-analyses of clinical trials often provide sufficient information for decision-makers to evaluate whether chance can explain apparent differences between interventions. Interpretation of the magnitude and importance of treatment effects beyond statistical significance can, however, be challenging, particularly for patient-reported outcomes (PROs) measured using questionnaires with which clinicians have limited familiarity. The objectives of our study are to systematically evaluate Cochrane systematic review authors' approaches to calculation, reporting and interpretation of pooled estimates of patient-reported outcome measures (PROMs) in meta-analyses. We will conduct a methodological survey of a random sample of Cochrane systematic reviews published from 1 January 2015 to 1 April 2017 that report at least one statistically significant pooled result for at least one PRO in the abstract. Author pairs will independently review all titles, abstracts and full texts identified by the literature search, and they will extract data using a standardised data extraction form. We will extract the following: year of publication, number of included trials, number of included participants, clinical area, type of intervention(s) and control(s), type of meta-analysis and use of the Grading of Recommendations, Assessment, Development and Evaluation approach to rate the quality of evidence, as well as information regarding the characteristics of PROMs, calculation and presentation of PROM effect estimates and interpretation of PROM effect estimates. We will document and summarise the methods used for the analysis, reporting and interpretation of each summary effect measure. We will summarise categorical variables with frequencies and percentages and continuous outcomes as means and/or medians and associated measures of dispersion. Ethics approval for this study is not required. 
We will disseminate the results of this review in peer-reviewed publications and conference presentations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. Experimental and theoretical study of second-order Raman scattering in BaLiF3

    NASA Astrophysics Data System (ADS)

    Mortier, M.; Gesland, J. Y.; Rousseau, M.

    1994-01-01

    Second-order Raman scattering is evidenced in the inverted cubic fluoroperovskite BaLiF3. Spectra recorded from 300 to 20 K are interpreted from the thermally weighted two-phonon density of states calculated with a rigid-ion model.

  19. Statistical Interpretation of the Local Field Inside Dielectrics.

    ERIC Educational Resources Information Center

    Berrera, Ruben G.; Mello, P. A.

    1982-01-01

    Compares several derivations of the Clausius-Mossotti relation to analyze consistently the nature of the approximations used and their range of applicability. Also presents a statistical-mechanical calculation of the local field for a classical system of harmonic oscillators interacting via the Coulomb potential. (Author/SK)
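
    The Clausius-Mossotti relation mentioned in this record connects a medium's relative permittivity to the polarizability and number density of its constituents. A minimal numeric sketch follows; the number density and polarizability values are hypothetical, chosen only to illustrate the algebra, not taken from the article.

```python
# Illustrative sketch of the Clausius-Mossotti relation:
#   (eps_r - 1) / (eps_r + 2) = N * alpha / (3 * eps0)
# solved here for the relative permittivity eps_r. All input values
# below are hypothetical.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m


def clausius_mossotti(n_density, alpha):
    """Solve the Clausius-Mossotti relation for the relative permittivity.

    n_density: number density of polarizable units (m^-3)
    alpha:     molecular polarizability (C.m^2/V)
    """
    x = n_density * alpha / (3.0 * EPS0)  # dimensionless ratio
    if x >= 1.0:
        raise ValueError("relation breaks down (polarization catastrophe)")
    return (1.0 + 2.0 * x) / (1.0 - x)


# Hypothetical dilute medium: eps_r comes out just above 1, as expected.
eps_r = clausius_mossotti(n_density=2.5e25, alpha=3.0e-40)
```

    For dilute media the ratio x is small and eps_r stays close to 1; as N·alpha grows toward 3·eps0 the denominator vanishes, which is the well-known polarization catastrophe guarded against above.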

  20. Application of theoretical models to active and passive remote sensing of saline ice

    NASA Technical Reports Server (NTRS)

    Han, H. C.; Kong, J. A.; Shin, R. T.; Nghiem, S. V.; Kwok, R.

    1992-01-01

    The random medium model is used to interpret the polarimetric active and passive measurements of saline ice. The ice layer is described as a host ice medium embedded with randomly distributed inhomogeneities, and the underlying sea water is considered as a homogeneous half-space. The scatterers in the ice layer are modeled with an ellipsoidal correlation function. The orientation of the scatterers is vertically aligned and azimuthally random. The strong permittivity fluctuation theory is used to calculate the effective permittivity, and the distorted Born approximation is used to obtain the polarimetric scattering coefficients. Thermal emissions are calculated based on the reciprocity and energy conservation principles. The effects of random roughness at the air-ice and ice-water interfaces are accounted for by incoherently adding the surface scattering to the volume scattering return. The theoretical model, which has been successfully applied to analyze the radar backscatter data of first-year sea ice, is used to interpret the measurements performed in the Cold Regions Research and Engineering Laboratory's CRRELEX program.

  1. Constraining 17O and 27Al NMR spectra of high-pressure crystals and glasses: New data for jadeite, pyrope, grossular, and mullite

    USGS Publications Warehouse

    Kelsey, K.E.; Stebbins, J.F.; Du, L.-S.; Hankins, B.

    2007-01-01

    The 17O NMR spectra of glasses quenched from melts at high pressure are often difficult to interpret due to overlapping peaks and lack of crystalline model compounds. High-pressure aluminosilicate glasses often contain significant amounts of [5]Al and [6]Al, thus these high-pressure glasses must contain oxygen bonded to high-coordinated aluminum. The 17O NMR parameters for the minerals jadeite, pyrope, grossular, and mullite are presented to assist interpretation of glass spectra and to help test quantum chemical calculations. The 17O NMR parameters for jadeite and grossular support previous peak assignments of oxygen bonded to Si and high-coordinated Al in high-pressure glasses as well as quantum chemical calculations. The oxygen tricluster in mullite is very similar to the previously observed tricluster in grossite (CaAl4O7) and suspected triclusters in glasses. We also present 27Al NMR spectra for pyrope, grossular, and mullite.

  2. [Interpreting change scores of the Behavioural Rating Scale for Geriatric Inpatients (GIP)].

    PubMed

    Diesfeldt, H F A

    2013-09-01

    The Behavioural Rating Scale for Geriatric Inpatients (GIP) consists of fourteen, Rasch modelled subscales, each measuring different aspects of behavioural, cognitive and affective disturbances in elderly patients. Four additional measures are derived from the GIP: care dependency, apathy, cognition and affect. The objective of the study was to determine the reproducibility of the 18 measures. A convenience sample of 56 patients in psychogeriatric day care was assessed twice by the same observer (a professional caregiver). The median time interval between rating occasions was 45 days (interquartile range 34-58 days). Reproducibility was determined by calculating intraclass correlation coefficients (ICC agreement) for test-retest reliability. The minimal detectable difference (MDD) was calculated based on the standard error of measurement (SEM agreement). Test-retest reliability expressed by the ICCs varied from 0.57 (incoherent behaviour) to 0.93 (anxious behaviour). Standard errors of measurement varied from 0.28 (anxious behaviour) to 1.63 (care dependency). The results show how the GIP can be applied when interpreting individual change in psychogeriatric day care participants.
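
    The minimal detectable difference reported in this record is conventionally derived from the standard error of measurement as MDD = 1.96 · √2 · SEM (the abstract does not spell out its exact formula, so this standard form is an assumption). A short sketch using the two SEM values quoted in the abstract (anxious behaviour: 0.28; care dependency: 1.63):

```python
import math

# Sketch of the minimal detectable difference (MDD) computation:
#   MDD = z * sqrt(2) * SEM
# with z = 1.96 for a 95% confidence level and the sqrt(2) factor
# accounting for measurement error on both test and retest.


def minimal_detectable_difference(sem, z=1.96):
    """MDD at the 95% confidence level for a test-retest design."""
    return z * math.sqrt(2.0) * sem


mdd_anxious = minimal_detectable_difference(0.28)  # ~0.78 scale points
mdd_care = minimal_detectable_difference(1.63)     # ~4.52 scale points
```

    An observed change smaller than the MDD for a given subscale cannot be distinguished from measurement error, which is exactly how such values guide the interpretation of individual change scores.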

  3. Structural and spectroscopic investigation of glycinium oxalurate

    NASA Astrophysics Data System (ADS)

    Kavitha, T.; Pasupathi, G.; Marchewka, M. K.; Anbalagan, G.; Kanagathara, N.

    2017-09-01

    Glycinium oxalurate (GO) single crystals have been synthesized and grown by the slow solvent evaporation method at room temperature. A single-crystal X-ray diffraction study confirms that GO crystallizes in the monoclinic system with the centrosymmetric space group P121/c1. The grown crystals are built up from singly protonated glycinium residues and singly dissociated oxalurate anions. A combination of ionic and donor-acceptor hydrogen-bond interactions linking together the glycine and oxaluric acid residues forms a three-dimensional network. The hydrogen-bonded network present in the crystal gives rise to notable vibrational effects. The molecular geometry, vibrational frequencies and intensities of the vibrational bands have been interpreted with the aid of structure optimization based on HF and density functional theory B3LYP methods with the 6-311++G(d,p) basis set. Frontier molecular orbital energies and other related electronic properties are calculated. The natural bonding orbital (NBO) charges have been calculated and interpreted. The molecular electrostatic potential map has been constructed and discussed in detail.

  4. Abdominal 64-MDCT for suspected appendicitis: the use of oral and IV contrast material versus IV contrast material only.

    PubMed

    Anderson, Stephan W; Soto, Jorge A; Lucey, Brian C; Ozonoff, Al; Jordan, Jacqueline D; Ratevosian, Jirair; Ulrich, Andrew S; Rathlev, Niels K; Mitchell, Patricia M; Rebholz, Casey; Feldman, James A; Rhea, James T

    2009-11-01

    The objective of our study was to compare the diagnostic accuracy of IV contrast-enhanced 64-MDCT with and without the use of oral contrast material in diagnosing appendicitis in patients with abdominal pain. We conducted a randomized trial of a convenience sample of adult patients presenting to an urban academic emergency department with acute nontraumatic abdominal pain and clinical suspicion of appendicitis, diverticulitis, or small-bowel obstruction. Patients were enrolled between 8 am and 11 pm when research assistants were present. Consenting subjects were randomized into one of two groups: Group 1 subjects underwent 64-MDCT performed with oral and IV contrast media and group 2 subjects underwent 64-MDCT performed solely with IV contrast material. Three expert radiologists independently reviewed the CT examinations, evaluating for the presence of appendicitis. Each radiologist interpreted 202 examinations, ensuring that each examination was interpreted by two radiologists. Individual reader performance and a combined interpretation performance of the two readers assigned to each case were calculated. In cases of disagreement, the third reader was asked to deliver a tiebreaker interpretation to be used to calculate the combined reader performance. Final outcome was based on operative, clinical, and follow-up data. We compared radiologic diagnoses with clinical outcomes to calculate the diagnostic accuracy of CT in both groups. Of the 303 patients enrolled, 151 patients (50%) were randomized to group 1 and the remaining 152 (50%) were randomized to group 2. The combined reader performance for the diagnosis of appendicitis in group 1 was a sensitivity of 100% (95% CI, 76.8-100%) and specificity of 97.1% (95% CI, 92.7-99.2%). The performance in group 2 was a sensitivity of 100% (73.5-100%) and specificity of 97.1% (92.9-99.2%). 
Patients presenting with nontraumatic abdominal pain imaged using 64-MDCT with isotropic reformations had similar characteristics for the diagnosis of appendicitis when IV contrast material alone was used and when oral and IV contrast media were used.
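
    The sensitivity and specificity figures quoted above come from comparing reader diagnoses against the final outcome in a 2x2 table. A minimal sketch of that calculation; the counts below are hypothetical placeholders, not the study's actual data.

```python
# Diagnostic accuracy from a 2x2 outcome table:
#   sensitivity = TP / (TP + FN)   (true-positive rate)
#   specificity = TN / (TN + FP)   (true-negative rate)
# The counts here are hypothetical, chosen only to illustrate the arithmetic.


def sensitivity(tp, fn):
    """Fraction of disease-positive cases correctly called positive."""
    return tp / (tp + fn)


def specificity(tn, fp):
    """Fraction of disease-negative cases correctly called negative."""
    return tn / (tn + fp)


# Hypothetical group: 13 true positives, 0 false negatives,
# 134 true negatives, 4 false positives.
sens = sensitivity(tp=13, fn=0)   # 1.0   -> 100%
spec = specificity(tn=134, fp=4)  # ~0.971 -> 97.1%
```

    The wide confidence interval on the sensitivity estimates in the abstract (76.8-100%) reflects the small number of appendicitis-positive cases in each arm, even though the point estimate is 100%.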

  5. Model-independent analysis of semileptonic B decays to D** for arbitrary new physics

    NASA Astrophysics Data System (ADS)

    Bernlochner, Florian U.; Ligeti, Zoltan; Robinson, Dean J.

    2018-04-01

    We explore semileptonic B decays to the four lightest excited charm mesons, D** = {D0*, D1*, D1, D2*}, for nonzero charged lepton mass and for all b→cℓν̄ four-Fermi interactions, including calculation of the O(ΛQCD/mc,b) and O(αs) corrections to the heavy quark limit for all form factors. In the heavy quark limit, some form factors are suppressed at zero recoil; therefore, the O(ΛQCD/mc,b) corrections can be very important. The D** rates exhibit sensitivities to new physics in b→cτν̄-mediated decays complementary to the D and D* modes. Since they are also important backgrounds to B→D(*)τν̄, the correct interpretation of future semitauonic B→D(*) rate measurements requires consistent treatment of both the D** backgrounds and the signals. Our results allow more precise and more reliable calculations of these B→D**ℓν̄ decays and are systematically improvable by better data on the e and μ modes. As an example, we show that the D** rates are more sensitive to a new c̄σμνb tensor interaction than the D(*) rates.

  6. Through-the-Wall Localization of a Moving Target by Two Independent Ultra Wideband (UWB) Radar Systems

    PubMed Central

    Kocur, Dušan; Švecová, Mária; Rovňáková, Jana

    2013-01-01

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improving through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method are considered. PMID:24021968

  7. Through-the-wall localization of a moving target by two independent ultra wideband (UWB) radar systems.

    PubMed

    Kocur, Dušan; Svecová, Mária; Rovňáková, Jana

    2013-09-09

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improving through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method are considered.
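
    The simplest of the cooperative localization alternatives mentioned above, the arithmetic average of the coordinates estimated by the two radars, can be sketched in a few lines. The coordinates below are hypothetical; the paper's joining-intersections and Taylor-series methods are considerably more involved and are not reproduced here.

```python
# Arithmetic-average fusion of two independent position estimates:
# each radar independently localizes the target, and the fused estimate
# is the coordinate-wise mean. Coordinates are hypothetical (metres).


def fuse_by_average(p1, p2):
    """Coordinate-wise arithmetic mean of two (x, y) position estimates."""
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)


radar_a = (2.4, 5.1)  # estimate from radar A
radar_b = (2.8, 4.7)  # estimate from radar B
fused = fuse_by_average(radar_a, radar_b)  # (2.6, 4.9)
```

    Averaging reduces the effect of independent, zero-mean errors in the two estimates, but unlike the ellipse-intersection clustering it cannot reject an outlier estimate from one radar, which motivates the more sophisticated fusion methods studied in the paper.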

  8. Psychostimulant and sensory stimulation interventions that target the reading and math deficits of students with ADHD.

    PubMed

    Zentall, Sydney S; Tom-Wright, Kinsey; Lee, Jiyeon

    2013-05-01

    The purpose of this review of students with attention deficit hyperactivity disorder (ADHD) was to summarize the following: (1) academic deficits in math and reading, (2) possible theoretical contributors to these deficits, and (3) psychostimulant interventions that target math and reading, as well as parallel interventions involving sensory stimulation. A comprehensive examination of the literature was conducted on children with ADHD with and without co-occurring disabilities, summarizing their reading and math achievement and the effects of psychostimulant and sensory stimulant interventions on these academic areas. Students without co-occurring disabilities (ADHD-) had fewer deficits in reading than in math and than students with co-occurring disabilities (ADHD+). Furthermore, students with ADHD+ demonstrated greater responsiveness to psychostimulants through improved reading recognition and math calculations, with limited gains in literal reading comprehension. Added sensory stimulation produced differential gains for both groups in reading recognition and comprehension and in math calculations and problem solving. The efficacy of psychostimulants was documented on specific areas of achievement for the ADHD+ group, but this review did not support the administration of psychostimulants for students with ADHD-. For both groups of students, differential gains, losses, and habituation were documented in response to sensory stimulation for both subareas within reading and math, which were interpreted as support for the optimal stimulation theory.

  9. Minimising human error in malaria rapid diagnosis: clarity of written instructions and health worker performance.

    PubMed

    Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven

    2007-01-01

    The usefulness of rapid diagnostic tests (RDT) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs, in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g. correct use both of the dipstick and cassette increased in the Philippines by 17%). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training as well as monitoring of user behaviour are an essential part of rapid test implementation.

  10. Setting the equation: establishing value in spine care.

    PubMed

    Resnick, Daniel K; Tosteson, Anna N A; Groman, Rachel F; Ghogawala, Zoher

    2014-10-15

    Topic review. Describe value measurement in spine care and discuss the motivation for, methods for, and limitations of such measurement. Spinal disorders are common and are an important cause of pain and disability. Numerous complementary and competing treatment strategies are used to treat spinal disorders, and the costs of these treatments are substantial and continue to rise despite clear evidence of improved health status as a result of these expenditures. The authors present the economic and legislative imperatives forcing the assessment of value in spine care. The definition of value in health care and methods to measure value specifically in spine care are presented. Limitations to the utility of value judgments and caveats to their use are presented. Examples of value calculations in spine care are presented and critiqued. Methods to improve and broaden the measurement of value across spine care are suggested, and the role of prospective registries in measuring value is discussed. Value can be measured in spine care through the use of appropriate economic measures and patient-reported outcomes measures. Value must be interpreted in light of the perspective of the assessor, the duration of the assessment period, the degree of appropriate risk stratification, and the relative value of treatment alternatives.

  11. Improved estimates of Belgian private health expenditure can give important lessons to other OECD countries.

    PubMed

    Calcoen, Piet; Moens, Dirk; Verlinden, Pieter; van de Ven, Wynand P M M; Pacolet, Jozef

    2015-03-01

    OECD Health Data are a well-known source for detailed information about health expenditure. These data enable us to analyze health policy issues over time and in comparison with other countries. However, current official Belgian estimates of private expenditure (as published in the OECD Health Data) have proven not to be reliable. We distinguish four potential major sources of problems with estimating private health spending: interpretation of definitions, formulation of assumptions, missing or incomplete data and incorrect data. Using alternative sources of billing information, we have reached more accurate estimates of private and out-of-pocket expenditure. For Belgium we found differences of more than 100% between our estimates and the official Belgian estimates of private health expenditure (as published in the OECD Health Data). For instance, according to OECD Health Data private expenditure on hospitals in Belgium amounts to €3.1 billion, while according to our alternative calculations these expenses represent only €1.1 billion. Total private expenditure differs by only 1%, but this is a mere coincidence. This exercise may be of interest to other OECD countries looking to improve their estimates of private expenditure on health. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Reasoning strategies used by students to solve stoichiometry problems and its relationship to alternative conceptions, prior knowledge, and cognitive variables

    NASA Astrophysics Data System (ADS)

    de Astudillo, Luisa Rojas; Niaz, Mansoor

    1996-06-01

    Achievement in science depends on a series of factors that characterize the cognitive abilities of the students and the complex interactions between these factors and the environment that intervenes in the formation of students' background. The objective of this study is to: a) investigate reasoning strategies students use in solving stoichiometric problems; b) explore the relation between these strategies and alternative conceptions, prior knowledge and cognitive variables; and c) interpret the results within an epistemological framework. Results obtained show how stoichiometric relations produce conflicting situations for students, leading to conceptual misunderstanding of concepts, such as mass, atoms and moles. The wide variety of strategies used by students attest to the presence of competing and conflicting frameworks (progressive transitions, cf. Lakatos, 1970), leading to greater conceptual understanding. It is concluded that the methodology developed in this study (based on a series of closely related probing questions, generally requiring no calculations, that elicit student conceptual understanding to varying degrees within an intact classroom context) was influential in improving student performance. This improvement in performance, however, does not necessarily affect students' hard core of beliefs.

  13. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may mix various noises such as baseline wander, power line interference, and electromagnetic interference in the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has been shown to be an effective tool for discarding noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the waves of the ECG signals after denoising, including the P, Q, R, and S waves, coincide with the original ECG signals when the new proposed method is employed.

  14. Abstract Interpreters for Free

    NASA Astrophysics Data System (ADS)

    Might, Matthew

    In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.

  15. How lay people understand and make sense of personalized disease risk information.

    PubMed

    Damman, Olga C; Bogaerts, Nina M M; van den Haak, Maaike J; Timmermans, Danielle R M

    2017-10-01

    Disease risk calculators are increasingly web-based, but previous studies have shown that risk information often poses problems for lay users. To examine how lay people understand the result derived from an online cardiometabolic risk calculator. A qualitative study was performed, using the risk calculator in the Dutch National Prevention Program for cardiometabolic diseases. The study consisted of three parts: (i) attention: completion of the risk calculator while an eye tracker registered eye movements; (ii) recall: completion of a recall task; and (iii) interpretation: participation in a semi-structured interview. We recruited people from the target population through an advertisement in a local newspaper; 16 people participated in the study, which took place in our university laboratory. Eye-tracking data showed that participants looked most extensively at numerical risk information. Percentages were recalled well, whereas natural frequencies and verbal labels were remembered less well. Five qualitative themes were derived from the interview data: (i) numerical information does not really sink in; (ii) the verbal categorical label made no real impact on people; (iii) people relied heavily on existing knowledge and beliefs; (iv) people zoomed in on risk factors, especially family history of diseases; and (v) people often compared their situation to that of their peers. Although people paid attention to and recalled the risk information to a certain extent, they seemed to have difficulty in properly using this information for interpreting their risk. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  16. Practical applications of internal dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, E.H.

    1994-06-01

    Accurate estimates of intake magnitude and internal dose are the goal for any assessment of an actual intake of radioactivity. When only one datum is available on which to base estimates, the choices for internal dose assessment become straightforward: apply the appropriate retention or excretion function, calculate the intake, and calculate the dose. The difficulty comes when multiple data and different types of data become available. Then practical decisions must be made on how to interpret conflicting data, or how to adjust the assumptions and techniques underlying internal dose assessments to give results consistent with the data. This article describes nine types of adjustments which can be incorporated into calculations of intake and internal dose, and then offers several practical insights into dealing with some real-world internal dose puzzles.
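
    The single-datum workflow described in this record (apply the retention function, calculate the intake, calculate the dose) can be sketched in two steps. All numbers below are hypothetical placeholders, not values from the article or from ICRP tables.

```python
# Hedged sketch of a single-datum internal dose assessment:
#   intake = measured activity / retention fraction at measurement time
#   dose   = intake * dose coefficient
# The retention fraction would come from a biokinetic model and the dose
# coefficient from published tables; both values here are hypothetical.


def estimate_intake(measured_activity_bq, retention_fraction):
    """Intake (Bq) inferred from one bioassay measurement."""
    return measured_activity_bq / retention_fraction


def committed_dose_sv(intake_bq, dose_coefficient_sv_per_bq):
    """Committed effective dose (Sv) from the estimated intake."""
    return intake_bq * dose_coefficient_sv_per_bq


# Hypothetical case: 50 Bq measured when the model says 5% is retained.
intake = estimate_intake(measured_activity_bq=50.0, retention_fraction=0.05)
dose = committed_dose_sv(intake, dose_coefficient_sv_per_bq=1.0e-6)
```

    With multiple, possibly conflicting measurements, each datum implies a different intake through its own retention fraction, and reconciling them is exactly the kind of judgment the article's nine adjustment types address.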

  17. Molecular structure and vibrational spectra of Irinotecan: a density functional theoretical study.

    PubMed

    Chinna Babu, P; Sundaraganesan, N; Sudha, S; Aroulmoji, V; Murano, E

    2012-12-01

    The solid phase FTIR and FT-Raman spectra of Irinotecan have been recorded in the regions 400-4000 and 50-4000 cm(-1), respectively. The spectra were interpreted in terms of fundamental modes, combination and overtone bands. The structure of the molecule was optimized and the structural characteristics were determined by density functional theory (DFT) using the B3LYP method with 6-31G(d) as the basis set. The vibrational frequencies calculated for Irinotecan by the DFT method were compared with the experimental frequencies, yielding good agreement between observed and calculated values. The infrared spectrum was also simulated from the calculated intensities. Besides, the molecular electrostatic potential (MEP) and frontier molecular orbitals (FMO) were investigated using theoretical calculations. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Does English proficiency impact on health outcomes for inpatients undergoing stroke rehabilitation?

    PubMed

    Davies, Sarah E; Dodd, Karen J; Tu, April; Zucchi, Emiliano; Zen, Stefania; Hill, Keith D

    2016-07-01

    To determine whether English proficiency and/or the frequency of interpreter use impacts on health outcomes for inpatient stroke rehabilitation. Retrospective case-control study. People admitted for inpatient stroke rehabilitation. A high English proficiency group comprised people with native or near native English proficiency (n = 80), and a low English proficiency group comprised people who preferred a language other than English (n = 80). Length of stay (LOS), discharge destination and Functional Independence Measure (FIM). The low English proficiency group showed a greater improvement in FIM from admission to discharge (p = 0.04). No significant differences were found between groups in LOS, discharge destination and number of encounters with allied health professionals. Increased interpreter usage improved FIM efficiency but did not significantly alter other outcomes. English proficiency does not appear to impact on health outcomes in inpatient rehabilitation with a primarily in-house professional interpreter service. However, there is a need for a larger powered study to confirm these findings. Implications for rehabilitation People with low English proficiency undergoing inpatient stroke rehabilitation in a setting with a primarily in-house professional interpreter service, achieved similar outcomes to those with high English proficiency irrespective of frequency of interpreter usage. A non-significant increase of 4 days length of stay was observed in the low English proficiency group compared to the high English proficiency group. For patients with low English proficiency, greater change in Functional Independence Measure efficiency scores was observed for those with higher levels of interpreter use relative to those with low interpreter use. Clinicians should optimise use of interpreters with patients with low English proficiency when possible.

  19. Data Systems and Reports as Active Participants in Data Interpretation

    ERIC Educational Resources Information Center

    Rankin, Jenny Grant

    2016-01-01

    Most data-informed decision-making in education is undermined by flawed interpretations. Educator-driven interventions to improve data use are beneficial but not omnipotent, as data misunderstandings persist at schools and school districts commended for ideal data use support. Meanwhile, most data systems and reports display figures without…

  20. West Virginia Interpretive Guide Training: A Collaborative Effort

    ERIC Educational Resources Information Center

    Balcarczyk, Kelly; McKenney, Kathryn; Smaldone, Dave; Arborgast, Doug

    2013-01-01

    West Virginia University's Extension Service partnered with the Recreation, Parks, and Tourism Resources Program to improve guide performance in West Virginia's tourism industry. The result of this partnership is a West Virginia Interpretive Guide Training program aimed at providing low-cost, widely available training to guides throughout the…

  1. The Effect of Teaching Interlanguage Pragmatics on Interpretation Ability of Iranian Translation Students

    ERIC Educational Resources Information Center

    Ravesh, Mahnaz Mahmoudi; Tabrizi, Hossein Heidari

    2017-01-01

    The present study sought to investigate whether Iranian translation students were successful in comprehending interlanguage pragmatic (ILP) features. Moreover, it tried to figure out whether teaching interlanguage pragmatics proved helpful for the improvement of interpretation ability of Iranian translation students. To this end, 30 students of…

  2. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  3. Spectroscopic (FT-IR, FT-Raman, NMR and UV-Visible) and quantum chemical studies of molecular geometry, Frontier molecular orbital, NLO, NBO and thermodynamic properties of salicylic acid.

    PubMed

    Suresh, S; Gunasekaran, S; Srinivasan, S

    2014-11-11

    The solid phase FT-IR and FT-Raman spectra of 2-hydroxybenzoic acid (salicylic acid) have been recorded in the regions 4000-400 and 4000-100 cm(-1), respectively. The optimized molecular geometry and fundamental vibrational frequencies are interpreted with the aid of structure optimizations and normal coordinate force field calculations based on the density functional theory (DFT) method, together with a comparative Hartree-Fock (HF) study, at the 6-311++G(d,p) basis set level. The calculated harmonic vibrational frequencies are scaled and compared with the experimentally obtained FT-IR and FT-Raman spectra. A detailed interpretation of the vibrational spectra of this compound has been made on the basis of the calculated potential energy distribution (PED). The time-dependent DFT method is employed to predict its absorption energy and oscillator strength. The linear polarizability (α) and the first order hyperpolarizability (β) values of the investigated molecule have been computed. Electronic properties, such as the HOMO and LUMO energies and the molecular electrostatic potential (MEP), are also calculated. Stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Published by Elsevier B.V.

  4. A combined experimental (IR, Raman and UV-Vis) and quantum chemical study of canadine

    NASA Astrophysics Data System (ADS)

    Joshi, Bhawani Datt; Srivastava, Anubha; Tandon, Poonam; Jain, Sudha; Ayala, A. P.

    2018-02-01

    Plant-based natural products cover a major sector of the medicinal field, and as such, focus on plant research has increased all over the world. As an attempt to aid that research, we have performed structural and spectroscopic analysis of a natural product, an alkaloid: canadine. Both ab initio Hartree-Fock (HF) and density functional theory (DFT) employing B3LYP with the 6-311++G(d,p) basis set were used for the calculations. The calculated vibrational frequencies were scaled and compared with the experimental infrared and Raman spectra. The complete vibrational assignments were made using the potential energy distribution. The structure-activity relation has also been interpreted by mapping the electrostatic potential surface and evaluating the reactivity descriptors, which provide valuable information for quality control of medicines and for drug-receptor interactions. Natural bond orbital analysis has also been performed to understand the stability and hyperconjugative interactions of the molecule. Furthermore, UV-Vis spectra have been recorded in an ethanol solvent (EtOH) and the electronic properties have been analyzed employing TD-DFT for both the gaseous and solvent phases. The HOMO and LUMO calculations, with their energy gap, show that charge transfer occurs within the molecule. Additionally, the nonlinear optical properties of the title compound have been interpreted, predicting it to be a good candidate for NLO materials.

  5. Calculational investigation of impact cratering dynamics - Early time material motions

    NASA Technical Reports Server (NTRS)

    Thomsen, J. M.; Austin, M. G.; Ruhl, S. F.; Schultz, P. H.; Orphal, D. L.

    1979-01-01

    Early time two-dimensional finite difference calculations of laboratory-scale hypervelocity (6 km/sec) impact of 0.3 g spherical 2024 aluminum projectiles into homogeneous plasticene clay targets were performed and the resulting material motions analyzed. Results show that the initial jetting of vaporized target material is qualitatively similar to experimental observation. The velocity flow field developed within the target is shown to have features quite similar to those found in calculations of near-surface explosion cratering. Specific application of Maxwell's analytic Z-Model (developed to interpret the flow fields of near-surface explosion cratering calculations), shows that this model can be used to describe the flow fields resulting from the impact cratering calculations, provided that the flow field center is located beneath the target surface, and that application of the model is made late enough in time that most of the projectile momentum has been dissipated.
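    For reference, Maxwell's Z-Model mentioned above describes the below-surface excavation flow with a simple power-law velocity field. In its commonly quoted incompressible form (a sketch, with α(t) the time-dependent flow strength, R the distance from the flow-field center, θ the angle from the downward axis through that center, and Z the model exponent):

```latex
u_R = \frac{\alpha(t)}{R^{Z}}, \qquad
u_\theta = u_R\,\frac{(Z-2)\,\sin\theta}{1+\cos\theta}
```

    Fitting such a field to the calculated motions is what requires placing the flow-field center beneath the target surface, as the abstract notes.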

  6. A MATLAB toolbox and Excel workbook for calculating the densities, seismic wave speeds, and major element composition of minerals and rocks at pressure and temperature

    NASA Astrophysics Data System (ADS)

    Abers, Geoffrey A.; Hacker, Bradley R.

    2016-02-01

    To interpret seismic images, rock seismic velocities need to be calculated at elevated pressure and temperature for arbitrary compositions. This technical report describes an algorithm, software, and data to make such calculations from the physical properties of minerals. It updates a previous compilation and Excel® spreadsheet and includes new MATLAB® tools for the calculations. The database of 60 mineral end-members includes all parameters needed to estimate density and elastic moduli for many crustal and mantle rocks at conditions relevant to the upper few hundreds of kilometers of Earth. The behavior of α and β quartz is treated as a special case, owing to its unusual Poisson's ratio and thermal expansion that vary rapidly near the α-β transition. The MATLAB tools allow integration of these calculations into a variety of modeling and data analysis projects.
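    The velocity calculation at the core of such a toolbox is standard isotropic elasticity: P- and S-wave speeds follow from the bulk modulus, shear modulus, and density. A minimal sketch (the moduli below are illustrative olivine-like round numbers, not values from the toolbox's 60-mineral database):

```python
import math

def seismic_velocities(k_pa: float, g_pa: float, rho: float) -> tuple:
    """Isotropic P- and S-wave speeds (m/s) from bulk modulus K (Pa),
    shear modulus G (Pa), and density rho (kg/m^3)."""
    vp = math.sqrt((k_pa + 4.0 * g_pa / 3.0) / rho)
    vs = math.sqrt(g_pa / rho)
    return vp, vs

# Hypothetical olivine-like values for illustration only:
vp, vs = seismic_velocities(k_pa=129e9, g_pa=81e9, rho=3355.0)
print(f"Vp = {vp / 1000:.2f} km/s, Vs = {vs / 1000:.2f} km/s")
```

    The database's contribution is evaluating K, G, and rho at elevated pressure and temperature before this step, which this sketch omits.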

  7. Molecular structure and vibrational analysis of Trifluoperazine by FT-IR, FT-Raman and UV-Vis spectroscopies combined with DFT calculations.

    PubMed

    Rajesh, P; Gunasekaran, S; Gnanasambandan, T; Seshadri, S

    2015-02-25

    The complete vibrational assignment and analysis of the fundamental vibrational modes of Trifluoperazine (TFZ) was carried out using the experimental FT-IR, FT-Raman and UV-Vis data and quantum chemical studies. The observed vibrational data were compared with the wavenumbers derived theoretically for the optimized geometry of the compound from the DFT-B3LYP gradient calculations employing 6-31G (d,p) basis set. Thermodynamic properties like entropy, heat capacity and enthalpy have been calculated for the molecule. The HOMO-LUMO energy gap has been calculated. The intramolecular contacts have been interpreted using natural bond orbital (NBO) and natural localized molecular orbital (NLMO) analysis. Important non-linear properties such as first hyperpolarizability of TFZ have been computed using B3LYP quantum chemical calculation. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Interpretation of open system petrogenetic processes: Phase equilibria constraints on magma evolution

    NASA Astrophysics Data System (ADS)

    Defant, Marc J.; Nielsen, Roger L.

    1990-01-01

    We have used a computer model (TRACES) to simulate low pressure differentiation of natural basaltic magmas in an attempt to investigate the chemical dynamics of open system magmatic processes. Our results, in the form of simulated liquid lines of descent and the calculated equilibrium mineralogy, were determined for perfect fractional crystallization; fractionation paired with recharge and eruption (PRF); fractionation paired with assimilation (AFC); and fractionation paired with recharge, eruption, and assimilation (FEAR). These simulations were calculated in an attempt to assess the effects of combinations of petrogenetic processes on major and trace element evolution of natural systems and to test techniques that have been used to decipher the relative roles of these processes. If the results of PRF calculations are interpreted in terms of a mass balance based fractionation model (e.g., Bryan et al., 1969), it is possible to generate low residuals even if one assumes that fractional crystallization was the only active process. In effect, the chemical consequences of recharge are invisible to mass balance models. Pearce element ratio analyses, however, can effectively discern the effects of PRF versus simple fractionation. The fractionating mineral proportions, and therefore, bulk distribution coefficients ( D¯) of a differentiating system are dependent on the recharge or assimilation rate. Comparison of the results of simulations assuming constant D¯ with the results calculated by TRACES show that the steady state liquid concentrations of some elements can differ by a factor of 2 to 5. If the PRF simulation is periodic, with episodes of mixing separated by intervals of fractionation, parallel liquidus mineral control lines are produced. Most of these control lines do not project back to the parental composition. 
This must be an important consideration when attempting to calculate a potential parental magma for any natural suite where magma chamber recharge has occurred. Most basaltic magmas cannot evolve to high silica compositions without magnetite fractionation. Small amounts of rhyolite assimilation (assimilation/fractionation < 0.1), however, can drive evolving basalts to more silica rich compositions. If mass balance models are used to interpret these synthetic AFC data, low residuals are obtained if magnetite is added to the crystallizing assemblage. This approach works even for cases where magnetite was not a fractionating phase. Thus, the mass balance results are mathematically correct, but are geologically irrelevant.

  9. Improving child welfare performance through supervisory use of client outcome data.

    PubMed

    Moore, T D; Rapp, C A; Roberts, B

    2000-01-01

    Despite their benefits, there is little evidence that outcome data are being widely used by program managers or field-level supervisors. Three interdependent factors facilitate the use of outcome data: well-constructed reports, an organizational culture that supports learning and outcome achievement, and managerial skill in interpreting data and taking relevant action. This article describes an outcome reporting package and training oriented toward frontline supervisors to help them use outcome data, shape a learning culture, interpret data, and take focused action toward improving outcomes for children and families.

  10. Interpreting activity in H(2)O-H(2)SO(4) binary nucleation.

    PubMed

    Bein, Keith J; Wexler, Anthony S

    2007-09-28

    Sulfuric acid-water nucleation is thought to be a key atmospheric mechanism for forming new condensation nuclei. In earlier literature, measurements of sulfuric acid activity were interpreted as the total (monomer plus hydrate) concentration above solution. Due to recent reinterpretations, most literature values for H(2)SO(4) activity are thought to represent the number density of monomers. Based on this reinterpretation, the current work uses the most recent models of H(2)O-H(2)SO(4) binary nucleation along with perturbation analyses to predict a decrease in critical cluster mole fraction, increase in critical cluster diameter, and orders of magnitude decrease in nucleation rate. Nucleation rate parameterizations available in the literature, however, give opposite trends. To resolve these discrepancies, nucleation rates were calculated for both interpretations of H(2)SO(4) activity and directly compared to the available parameterizations as well as the perturbation analysis. Results were in excellent agreement with older parameterizations that assumed H(2)SO(4) activity represents the total concentration and duplicated the predicted trends from the perturbation analysis, but differed by orders of magnitude from more recent parameterizations that assume H(2)SO(4) activity represents only the monomer. Comparison with experimental measurements available in the literature revealed that the calculations of the current work assuming H(2)SO(4) activity represents the total concentration are most frequently in agreement with observations.

  11. A Method for Improved Interpretation of "Spot" Biomarker Data ...

    EPA Pesticide Factsheets

    The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize the processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for the EPA.

  12. Interpretation of nitric oxide profile observed in January 1992 over Kiruna

    NASA Astrophysics Data System (ADS)

    Kondo, Y.; Kawa, S. R.; Lary, D.; Sugita, T.; Douglass, Anne R.; Lutman, E.; Koike, M.; Deshler, T.

    1996-05-01

    NO mixing ratios measured from Kiruna (68°N, 20°E), Sweden, on January 22, 1992, revealed values much smaller than those observed at midlatitude near equinox and had a sharper vertical gradient around 25 km. Location of the measurements was close to the terminator and near the edge of the polar vortex, which is highly distorted from concentric flow by strong planetary wave activities. These conditions necessitate accurate calculation, properly taking into account the transport and photochemical processes, in order to quantitatively explain the observed NO profile. A three-dimensional chemistry and transport model (CTM) and a trajectory model (TM) were used to interpret the profile observations within their larger spatial, temporal, and chemical context. The NOy profile calculated by the CTM is in good agreement with that observed on January 31, 1992. In addition, model NOy profiles show small variabilities depending on latitudes, and they change little between January 22 and 31. The TM uses the observed NOy values. The NO values calculated by the CTM and TM agree with observations up to 27 km. Between 20 and 27 km the NO values calculated by the trajectory model including only gas phase chemistry are much larger than those including heterogeneous chemistry, indicating that NO mixing ratios were reduced significantly by heterogeneous chemistry on sulfuric acid aerosols. Very little sunlight to generate NOx from HNO3 was available, also causing the very low NO values. The good agreement between the observed and modeled NO profiles indicates that models can reproduce the photochemical and transport processes in the region where NO values have a sharp horizontal gradient. Moreover, CTM and TM model results show that even when the NOy gradients are weak, the model NO depends upon accurate calculation of the transport and insolation for several days.

  13. Gravimetric surveys for assessing rock mass condition around a mine shaft

    NASA Astrophysics Data System (ADS)

    Madej, Janusz

    2017-06-01

    The fundamentals of the vertical gravimetric surveying method in mine shafts are presented in the paper. The methods of gravimetric measurement and the calculation of interval and complex density are discussed in detail. The density calculations are based on an original method accounting for the gravity influence of the mine shaft itself, thus guaranteeing closeness of the calculated and real values of the density of rocks beyond the shaft lining. The results of many gravimetric surveys performed in shafts are presented and interpreted. As a result, information about the location of heterogeneous zones in the rock mass beyond the shaft lining is obtained. In many cases, these zones threatened the safe operation of machines and utilities in the shaft.
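    The interval-density calculation described here is, at its core, the textbook vertical-gradient (borehole-gravimetry) relation: the apparent density between two stations follows from how the measured gravity difference departs from the free-air gradient. A minimal sketch that omits the paper's original shaft-influence correction:

```python
FREE_AIR_GRADIENT = 0.3086  # mGal/m, standard free-air value
FOUR_PI_G = 0.08385         # mGal/m per (g/cm^3), 4*pi*G in these units

def interval_density(delta_g_mgal: float, delta_z_m: float) -> float:
    """Apparent interval density (g/cm^3) between two shaft stations,
    from the measured vertical gravity difference over the interval.
    Textbook relation only; the shaft-void correction central to the
    paper's method is not included."""
    gradient = delta_g_mgal / delta_z_m
    return (FREE_AIR_GRADIENT - gradient) / FOUR_PI_G

# e.g. a 2.00 mGal decrease measured downward over a 20 m interval:
print(round(interval_density(2.00, 20.0), 2))  # -> 2.49
```

    Intervals where the apparent density departs from the expected rock density are the kind of heterogeneous zones the surveys aim to locate.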

  14. Experiences of Kurdish war-wounded refugees in communication with Swedish authorities through interpreter.

    PubMed

    Fatahi, Nabi; Nordholm, Lena; Mattsson, Bengt; Hellström, Mikael

    2010-02-01

    To study experiences of war-wounded Kurdish refugees with respect to cross-cultural communication through interpreters. Semi-structured interviews were conducted with ten men, aged 31-42. Content analysis was used for analysis and interpretation of data. War-wounded Kurdish refugees experienced a number of difficulties regarding communication through interpreters, mainly related to the insufficient language link to the Swedish authorities, particularly health care personnel. In many instances, interpreters were selected based on the immigrant's citizenship rather than mother tongue, leading to a more complex, tri-lingual interpretation situation. Differences in cultural background, fear, suspicion and lack of confidence in interpreters were addressed as other problems by the participants. Interpreter competence and patient confidence in the interpreter are essential for an adequate cross-cultural health communication. Assignment of interpreters should be based on knowledge of the patient's/client's mother tongue, rather than citizenship, and the outcome is improved by a common ethnic and cultural background of interpreter and patient/client. Our study should be considered as a pilot study, and the results should be validated in larger cohorts as well as in other ethnic and language groups. In order to minimize communication misunderstandings, complicated tri-lingual interpretation situations should be avoided. Interpreters should ideally be assigned according to patient's/client's mother tongue rather than citizenship. Interpreters' competence and patient's/client's confidence in interpreter may have significant impact on communication outcome. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  15. An Improved 3D Joint Inversion Method of Potential Field Data Using Cross-Gradient Constraint and LSQR Method

    NASA Astrophysics Data System (ADS)

    Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz

    2018-06-01

    The joint interpretation of two sets of geophysical data related to the same source is an appropriate way to decrease the non-uniqueness of the resulting models during the inversion process. Among the available methods, one based on a cross-gradient constraint that combines the two datasets is an efficient approach. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the situation and extension of the anomaly of interest. In this paper, the first aim is to speed up the required calculation by substituting singular value decomposition with the least-squares QR (LSQR) method to solve the large-scale kernel matrix of the 3D inversion more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. This algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine, in southeastern Iran, was tested. The results obtained by the improved joint 3D inversion with the cross-gradient and compactness constraints showed a mineralised zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This is a further confirmation of the accuracy and progress of the improved inversion algorithm.
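    The cross-gradient constraint at the heart of this approach penalizes structural dissimilarity: it is the cross product of the two models' gradients, which vanishes wherever their boundaries align. A minimal 2D sketch of the quantity itself (the full algorithm couples this term into the LSQR-solved inverse problem, which is not shown):

```python
import numpy as np

def cross_gradient(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """z-component of t = grad(m1) x grad(m2) on a 2D model grid.
    t -> 0 wherever the two models are structurally similar, i.e.
    their gradients are parallel (or one vanishes)."""
    d1z, d1x = np.gradient(m1)  # gradients along axis 0 (z) and axis 1 (x)
    d2z, d2x = np.gradient(m2)
    return d1x * d2z - d1z * d2x

# Two models sharing the same structure give a (near-)zero cross-gradient:
base = np.add.outer(np.linspace(0, 1, 8), np.linspace(0, 2, 8))
t = cross_gradient(base, 3.0 * base + 1.0)  # m2 is a rescaling of m1
print(np.allclose(t, 0.0))  # -> True
```

    Minimizing the sum of squares of this quantity alongside the two data misfits is what ties the gravity and magnetic models together during inversion.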

  16. Protocol for determining the diagnostic validity of physical examination maneuvers for shoulder pathology.

    PubMed

    Somerville, Lyndsay; Bryant, Dianne; Willits, Kevin; Johnson, Andrew

    2013-02-08

    Shoulder complaints are the third most common musculoskeletal problem in the general population. There is an abundance of physical examination maneuvers for diagnosing shoulder pathology, but the validity of these maneuvers has not been adequately addressed. We propose a large Phase III study to investigate the accuracy of these tests in an orthopaedic setting. We will recruit consecutive new shoulder patients who are referred to two tertiary orthopaedic clinics. We will select which physical examination tests to include using a modified Delphi process. The physician will take a thorough history from the patient and indicate their certainty about each possible diagnosis (certain the diagnosis is absent, present, or requires further testing). The clinician will only perform the physical examination maneuvers for diagnoses where uncertainty remains. We will consider arthroscopy the reference standard for patients who undergo surgery within 8 months of physical examination, and magnetic resonance imaging with arthrogram for patients who do not. We will calculate the sensitivity, specificity, and positive and negative likelihood ratios, and investigate whether combinations of the top tests provide stronger predictions of the presence or absence of disease. There are several considerations when performing a diagnostic study to ensure that the results are applicable in a clinical setting. These include: 1) including a representative sample; 2) selecting an appropriate reference standard; 3) avoiding verification bias; 4) blinding the interpreters of the physical examination tests to the interpretation of the gold standard; and 5) blinding the interpreters of the gold standard to the interpretation of the physical examination tests. 
The results of this study will inform clinicians of which tests, or combination of tests, successfully reduce diagnostic uncertainty, which tests are misleading and how physical examination may affect the magnitude of the confidence the clinician feels about their diagnosis. The results of this study may reduce the number of costly and invasive imaging studies (MRI, CT or arthrography) that are requisitioned when uncertainty about diagnosis remains following history and physical exam. We also hope to reduce the variability between specialists in which maneuvers are used during physical examination and how they are used, all of which will assist in improving consistency of care between centres.
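    The accuracy measures named in the protocol follow directly from a 2x2 table of test result against reference standard. A minimal sketch with hypothetical counts (not data from the study):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table
    of index-test result vs. reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1.0 - spec),  # factor by which a positive test raises the odds
        "LR-": (1.0 - sens) / spec,  # factor by which a negative test lowers them
    }

# Hypothetical counts for one physical-examination maneuver:
m = diagnostic_metrics(tp=45, fp=10, fn=5, tn=40)
print(m)
```

    Likelihood ratios are the natural summary here because they quantify exactly how much a maneuver's result should shift the clinician's pre-test certainty about a diagnosis.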

  17. Application of field geophysics in geomorphology: Advances and limitations exemplified by case studies

    NASA Astrophysics Data System (ADS)

    Schrott, Lothar; Sass, Oliver

    2008-01-01

    During the last decade, the use of geophysical techniques has become popular in many geomorphological studies. However, the correct handling of geophysical instruments and the subsequent processing of the data they yield are difficult tasks. Furthermore, the description and interpretation of geomorphological settings to which they apply can significantly influence the data gathering and subsequent modelling procedure (e.g. achieving a maximum depth of 30 m requires a certain profile length and geophone spacing or a particular frequency of antenna). For more than three decades geophysical techniques have been successfully applied, for example, in permafrost studies. However, in many cases complex or more heterogeneous subsurface structures could not be adequately interpreted due to limited computer facilities and time-consuming calculations. As a result of recent technical improvements, geophysical techniques have been applied to a wider spectrum of geomorphological and geological settings. This paper aims to present some examples of geomorphological studies that demonstrate the powerful integration of geophysical techniques and highlight some of the limitations of these techniques. A focus has been given to the three most frequently used techniques in geomorphology to date, namely ground-penetrating radar, seismic refraction and DC resistivity. Promising applications are reported for a broad range of landforms and environments, such as talus slopes, block fields, landslides, complex valley fill deposits, karst and loess covered landforms. A qualitative assessment highlights suitable landforms and environments. The techniques can help to answer yet unsolved questions in geomorphological research regarding for example sediment thickness and internal structures. 
However, based on case studies it can be shown that the use of a single geophysical technique or a single interpretation tool is not recommended for many geomorphological surface and subsurface conditions as this may lead to significant errors in interpretation. Because of changing physical properties of the subsurface material (e.g. sediment, water content) in many cases only a combination of two or sometimes even three geophysical methods gives sufficient insight to avoid serious misinterpretation. A "good practice guide" has been framed that provides recommendations to enable the successful application of three important geophysical methods in geomorphology and to help users avoid making serious mistakes.

  18. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  20. A randomized control trial comparing use of a novel electrocardiogram simulator with traditional teaching in the acquisition of electrocardiogram interpretation skill.

    PubMed

    Fent, Graham; Gosai, Jivendra; Purva, Makani

    2016-01-01

    Accurate interpretation of the electrocardiogram (ECG) remains an essential skill for medical students and junior doctors. While many techniques for teaching ECG interpretation are described, no single method has been shown to be superior. This randomized control trial is the first to investigate whether teaching ECG interpretation using a computer simulator program or traditional teaching leads to improved scores in a test of ECG interpretation among medical students and postgraduate doctors immediately after and 3 months following teaching. Participants' opinions of the program were assessed using a questionnaire. There were no differences in ECG interpretation test scores immediately after or 3 months after teaching in the lecture or simulator groups. At present therefore, there is insufficient evidence to suggest that ECG simulator programs are superior to traditional teaching. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Determining the Kinetic Parameters Characteristic of Microalgal Growth.

    ERIC Educational Resources Information Center

    Martinez Sancho, Maria Eugenie; And Others

    1991-01-01

    An activity in which students obtain a growth curve for algae, identify the exponential and linear growth phases, and calculate the parameters which characterize both phases is described. The procedure, a list of required materials, experimental conditions, analytical technique, and a discussion of the interpretations of individual results are…
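    The exponential-phase parameter such an activity asks for is typically recovered as the slope of ln(N) versus time. A minimal sketch with synthetic counts (the data below are invented for illustration):

```python
import math

def specific_growth_rate(t: list, n: list) -> float:
    """Least-squares slope of ln(N) vs t over the exponential phase,
    i.e. the specific growth rate mu in N(t) = N0 * exp(mu * t)."""
    y = [math.log(v) for v in n]
    tbar = sum(t) / len(t)
    ybar = sum(y) / len(y)
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

# Synthetic counts doubling every 2 days (so mu = ln 2 / 2 per day):
days = [0, 2, 4, 6]
cells = [1e5, 2e5, 4e5, 8e5]
print(round(specific_growth_rate(days, cells), 3))  # -> 0.347
```

    The linear phase would be handled analogously, fitting N itself rather than ln(N) against time.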

  2. When 95% Accurate Isn't: Exploring Bayes's Theorem

    ERIC Educational Resources Information Center

    CadwalladerOlsker, Todd D.

    2011-01-01

    Bayes's theorem is notorious for being a difficult topic to learn and to teach. Problems involving Bayes's theorem (either implicitly or explicitly) generally involve calculations based on two or more given probabilities and their complements. Further, a correct solution depends on students' ability to interpret the problem correctly. Most people…
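    The calculation behind the article's title can be made concrete with Bayes's theorem: for a rare condition, even a "95% accurate" test yields mostly false positives. A minimal sketch:

```python
def posterior(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(condition | positive test) via Bayes's theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test with sensitivity = specificity = 0.95 applied to a
# condition affecting 1 person in 1000:
print(round(posterior(0.001, 0.95, 0.95), 3))  # -> 0.019
```

    A positive result here implies under a 2% chance of actually having the condition, which is the counterintuitive interpretation step the article targets.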

  3. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
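    The equivalence the note describes rests on the fact that each mean is the OLS intercept of a regression on a constant, under a suitable transformation of the data. A minimal sketch using the closed-form equivalents of those regressions:

```python
import math

def arithmetic_mean(x: list) -> float:
    # OLS intercept of x regressed on a constant
    return sum(x) / len(x)

def geometric_mean(x: list) -> float:
    # exponentiated OLS intercept of log(x) on a constant
    return math.exp(arithmetic_mean([math.log(v) for v in x]))

def harmonic_mean(x: list) -> float:
    # reciprocal of the OLS intercept of 1/x on a constant
    return 1.0 / arithmetic_mean([1.0 / v for v in x])

data = [2.0, 4.0, 8.0]
print(round(arithmetic_mean(data), 4))  # -> 4.6667
print(round(geometric_mean(data), 4))   # -> 4.0
print(round(harmonic_mean(data), 4))    # -> 3.4286
```

    Weighted averages follow the same pattern with weighted least squares in place of OLS.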

  4. Applying Descriptive Statistics to Teaching the Regional Classification of Climate.

    ERIC Educational Resources Information Center

    Lindquist, Peter S.; Hammel, Daniel J.

    1998-01-01

    Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…

  5. Evaluation of toxicity equivalent calculations for use with data from in vitro aromatase inhibition assays

    EPA Science Inventory

    With growing investment in alternatives to traditional animal toxicity tests, the next generation of risk assessment must interpret new streams of data to identify hazards and protect humans and wildlife populations. If the effects of a chemical can be characterized by a battery...

  6. Environmental--Access to Safe Water Learning Module. Development Education Program.

    ERIC Educational Resources Information Center

    World Bank, Washington, DC.

    This learning module has two main goals: (1) to increase students' knowledge and understanding of the often complex relationship between sustainable development and the social, economic, and environmental conditions in a country; and (2) to strengthen students' ability to perform statistical calculations, make and interpret maps, charts, and…

  7. Gravity/Topography Admittances and Lithospheric Evolution on Mars: The Importance of Finite-Amplitude Topography

    NASA Technical Reports Server (NTRS)

    McGovern, Patrick J.; Solomon, Sean C.; Smith, David E.; Zuber, Maria T.; Neumann, Gregory A.; Head, J. W., III; Phillips, Roger J.; Simons, Mark

    2001-01-01

    We calculate localized gravity/topography admittances for Mars, in order to estimate elastic lithosphere thickness. A finite-amplitude correction to modeled gravity is required to properly interpret admittances in high-relief regions of Mars. Additional information is contained in the original extended abstract.

  8. Soil Studies: Applying Acid-Base Chemistry to Environmental Analysis.

    ERIC Educational Resources Information Center

    West, Donna M.; Sterling, Donna R.

    2001-01-01

    Laboratory activities for chemistry students focus attention on the use of acid-base chemistry to examine environmental conditions. After using standard laboratory procedures to analyze soil and rainwater samples, students use web-based resources to interpret their findings. Uses CBL probes and graphing calculators to gather and analyze data and…

  9. Recommended improvements to the DS02 dosimetry system's calculation of organ doses and their potential advantages for the Radiation Effects Research Foundation.

    PubMed

    Cullings, Harry M

    2012-03-01

    The Radiation Effects Research Foundation (RERF) uses a dosimetry system to calculate radiation doses received by the Japanese atomic bomb survivors based on their reported location and shielding at the time of exposure. The current system, DS02, completed in 2003, calculates detailed doses to 15 particular organs of the body from neutrons and gamma rays, using new source terms and transport calculations as well as some other improvements in the calculation of terrain and structural shielding, but continues to use methods from an older system, DS86, to account for body self-shielding. Although recent developments in models of the human body from medical imaging, along with contemporary computer speed and software, allow for improvement of the calculated organ doses, before undertaking changes to the organ dose calculations, it is important to evaluate the improvements that can be made and their potential contribution to RERF's research. The analysis provided here suggests that the most important improvements can be made by providing calculations for more organs or tissues and by providing a larger series of age- and sex-specific models of the human body from birth to adulthood, as well as fetal models.

  10. Active and Passive 3D Vector Radiative Transfer with Preferentially-Aligned Ice Particles

    NASA Technical Reports Server (NTRS)

    Adams, Ian S.; Munchak, Stephen J.; Pelissier, Craig S.; Kuo, Kwo-Sen; Heymsfield, Gerald M.

    2017-01-01

    For the purposes of interpreting active (radar) and passive (radiometer) microwave and millimeter wave remote sensing data, we have constructed a consistent radiative transfer modeling framework to simulate the responses for arbitrary sensors with differing sensing geometries and hardware configurations. As part of this work, we have implemented a recent method for calculating the electromagnetic properties of individual ice crystals and snowflakes. These calculations will allow us to exploit polarized remote sensing observations to discriminate different particle types and elucidate the dynamics of cloud and precipitating systems.

  11. Temperature shift of intraband absorption peak in tunnel-coupled QW structure

    NASA Astrophysics Data System (ADS)

    Akimov, V.; Firsov, D. A.; Duque, C. A.; Tulupenko, V.; Balagula, R. M.; Vinnichenko, M. Ya.; Vorobjev, L. E.

    2017-04-01

    An experimental study of the intersubband light absorption by a 100-period GaAs/Al0.25Ga0.75As double quantum well heterostructure doped with silicon is reported and interpreted. A small temperature redshift of the 1-3 intersubband absorption peak is detected. Numerical calculations of the absorption coefficient, including self-consistent Hartree calculations of the bottom of the conduction band, show good agreement with the observed phenomena. The temperature dependence of the energy gap of the material and the depolarization shift should both be accounted for to explain the shift.

  12. Advancing Late Mesoproterozoic Paleogeography With New Constraints From The Keweenawan Rift And The Umkondo Large Igneous Province

    NASA Astrophysics Data System (ADS)

    Swanson-Hysell, N.; Kilian, T. M.; Bowring, S. A.; Hanson, R. E.; Burgess, S. D.; Ramezani, J.

    2014-12-01

    Laurentia and Kalahari are currently interpreted as independently moving continents ca. 1110 million years ago that subsequently became conjoined in the supercontinent Rodinia. Their relative positions and orientations depend both on the directional comparison of paleomagnetic poles and on the geomagnetic polarity choices for those poles. In this contribution, we use newly developed and existing paleomagnetic and geochronological data from both the ca. 1110-1085 Ma Midcontinent Rift of Laurentia and the ca. 1109 Ma Umkondo Large Igneous Province (LIP) of Kalahari to present improved constraints on relations between the two continents. Previous mean poles for the Umkondo LIP have been calculated either by taking the mean of regional submeans or at the site level, which is problematic given the preponderance of multiple sites from individual cooling units. We report a new Umkondo grand mean pole that is the mean of the virtual geomagnetic poles (VGPs) of individual cooling units and is reinforced with new data from ~20 previously unstudied Umkondo sills from Botswana. This approach yields a pole whose position and uncertainty are the most robust calculated to date. The portion of Laurentia's Mesoproterozoic apparent polar wander path (APWP) known as the Logan Loop and Keweenawan Track partially overlaps in age with the Umkondo pole and is of central importance in efforts to reconstruct late Mesoproterozoic paleogeography. Ongoing debates as to the geometry and timing of Rodinia assembly critically hinge on the comparison of paleomagnetic poles from other continents to the Keweenawan record. We present an updated compilation for the Keweenawan Track APWP using an improved chronostratigraphic context enabled by new geochronological and paleomagnetic data. Ongoing improvements and time-calibration of this record further constrain the rate of Laurentia's motion and provide opportunities for increased rigor in the determination of relative paleogeographic positions, such as between Laurentia and Kalahari.

  13. Rates and Predictors of Professional Interpreting Provision for Patients With Limited English Proficiency in the Emergency Department and Inpatient Ward

    PubMed Central

    Ryan, Jennifer; Abbato, Samantha; Greer, Ristan; Vayne-Bossert, Petra; Good, Phillip

    2017-01-01

    The provision of professional interpreting services in the hospital setting decreases communication errors of clinical significance and improves clinical outcomes. A retrospective audit was conducted at a tertiary referral adult hospital in Brisbane, Australia. Of 20 563 admissions of patients presenting to the hospital emergency department (ED) and admitted to a ward during 2013-2014, 582 (2.8%) were identified as requiring interpreting services. In all, 19.8% of admissions were provided professional interpreting services in the ED, and 26.1% were provided on the ward. Patients were more likely to receive interpreting services in the ED if they were younger, spoke an Asian language, or used sign language. On the wards, using sign language was associated with three times the odds of being provided an interpreter compared with other languages spoken. Characteristics of patients, including their age and type of language spoken, influence the clinician's decision to engage a professional interpreter in both the ED and the inpatient ward. PMID:29144184

  14. Biased Interpretation of Ambiguous Social Scenarios in Anorexia Nervosa.

    PubMed

    Cardi, Valentina; Turton, Robert; Schifano, Sylvia; Leppanen, Jenni; Hirsch, Colette R; Treasure, Janet

    2017-01-01

    Patients with anorexia nervosa experience increased sensitivity to the risk of social rejection. The aims of this study were to assess the interpretation of ambiguous social scenarios depicting the risk of rejection and to examine the relationship between interpretation biases and clinical symptoms. Thirty-five women with anorexia nervosa and 30 healthy eaters completed clinical questionnaires, alongside a sentence completion task. This task required participants to generate completions to ambiguous social scenarios and to endorse their best completion. Responses were rated as being negative, neutral or positive. Patients endorsed more negative interpretations and fewer neutral and positive interpretations compared with healthy eaters. The frequency of endorsed negative interpretations correlated with depression, anxiety and fear of weight gain and body disturbance. A negative interpretation bias towards social stimuli is present in women with anorexia nervosa and correlates with clinical symptoms. Interventions aimed at reducing this bias could improve illness prognosis. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  15. Bayesian model aggregation for ensemble-based estimates of protein pKa values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke J.; Hogan, Emilie A.; Pulsipher, Trenton C.

    2014-03-01

    This paper investigates an ensemble-based technique called Bayesian Model Averaging (BMA) to improve the performance of protein amino acid pKa predictions. Structure-based pKa calculations play an important role in the mechanistic interpretation of protein structure and are also used to determine a wide range of protein properties. A diverse set of methods currently exists for pKa prediction, ranging from empirical statistical models to ab initio quantum mechanical approaches. However, each of these methods is based on a set of assumptions that have inherent biases and sensitivities that can affect a model's accuracy and generalizability for pKa prediction in complicated biomolecular systems. We use BMA to combine eleven diverse prediction methods that each estimate pKa values of amino acids in staphylococcal nuclease. These methods are based on work conducted for the pKa Cooperative, and the pKa measurements are based on experimental work conducted by the García-Moreno lab. Our study demonstrates that the aggregated estimate obtained from BMA outperforms all individual prediction methods in our cross-validation study, with improvements of 40-70% over other method classes. This work illustrates a new possible mechanism for improving the accuracy of pKa prediction and lays the foundation for future work on aggregate models that balance computational cost with prediction accuracy.
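Bayesian Model Averaging, as described above, weights each method's prediction by its posterior model probability estimated from calibration data. A minimal sketch of the idea follows; the data, the equal-prior assumption, and the Gaussian-residual likelihood are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def bma_weights(pred_cal, y_cal, sigma=1.0):
    """Posterior model weights from calibration errors, assuming
    equal model priors and i.i.d. Gaussian residuals (a simplification)."""
    # Log-likelihood of each model on the calibration data
    ll = -0.5 * np.sum((pred_cal - y_cal) ** 2, axis=1) / sigma**2
    w = np.exp(ll - ll.max())  # subtract max for numerical stability
    return w / w.sum()

# Hypothetical: three methods' pKa predictions for four calibration residues
pred_cal = np.array([[4.1, 6.8, 3.9, 10.2],
                     [4.5, 7.5, 3.0, 11.0],
                     [4.0, 7.0, 4.0, 10.4]])
y_cal = np.array([4.0, 7.0, 4.0, 10.5])  # measured values

w = bma_weights(pred_cal, y_cal)
new_preds = np.array([5.2, 5.9, 5.5])  # the three methods on a new residue
print(w @ new_preds)  # BMA aggregate estimate
```

The aggregate automatically leans on whichever methods tracked the calibration measurements most closely, which is the mechanism behind the reported gains over any single method class.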

  16. LipidFrag: Improving reliability of in silico fragmentation of lipids and application to the Caenorhabditis elegans lipidome

    PubMed Central

    Neumann, Steffen; Schmitt-Kopplin, Philippe

    2017-01-01

    Lipid identification is a major bottleneck in high-throughput lipidomics studies. However, tools for the analysis of lipid tandem MS spectra are rather limited. While the comparison against spectra in reference libraries is one of the preferred methods, these libraries are far from being complete. In order to improve identification rates, the in silico fragmentation tool MetFrag was combined with Lipid Maps and lipid-class-specific classifiers which calculate probabilities for lipid class assignments. The resulting LipidFrag workflow was trained and evaluated on different commercially available lipid standard materials, measured with data-dependent UPLC-Q-ToF-MS/MS acquisition. The automatic analysis was compared against manual MS/MS spectra interpretation. With the lipid-class-specific models, identification of the true positives was improved, especially for cases where candidate lipids from different lipid classes had similar MetFrag scores, by removing up to 56% of false positive results. This LipidFrag approach was then applied to MS/MS spectra of lipid extracts of the nematode Caenorhabditis elegans. Fragments explained by LipidFrag match known fragmentation pathways, e.g., neutral losses of lipid headgroups and fatty acid side chain fragments. Based on prediction models trained on standard lipid materials, high probabilities for correct annotations were achieved, which makes LipidFrag a good choice for automated lipid data analysis and reliability testing of lipid identifications. PMID:28278196

  17. Planetary Migration and Kuiper Belt Dynamics

    NASA Astrophysics Data System (ADS)

    Malhotra, Renu

    The Kuiper belt holds memory of the dynamical processes that shaped the architecture of the solar system, including the orbital migration history of the giant planets. We propose studies of the orbital dynamics of the Kuiper Belt in order to understand the origin of its complex dynamical structure and its link to the orbital migration history of the giant planets. By means of numerical simulations, statistical tests, as well as analytical calculations we will (1) investigate the origin of resonant Kuiper belt objects to test alternative scenarios of Neptune's migration history, (2) investigate the long term dynamical evolution of the Haumea family of Kuiper Belt objects in order to improve the age estimate of this family, and (3) investigate resonance-sticking behavior and the Kozai-Lidov mechanism and its role in the origin of the extended scattered disk. These studies directly support the goals of the NASA-OSS program by improving our understanding of the origin of the solar system's architecture. Our results will provide constraints on the nature and timing of the dynamical excitation event that is thought to have occurred in early solar system history and to have determined the architecture of the present-day solar system; our results will also provide deeper theoretical understanding of sticky mean motion resonances which contribute greatly to the longevity of many small bodies, improve our understanding of dynamical transport of planetesimals in planetary systems, and help interpret observations of other planetary systems.

  18. Assessment of offshore New Jersey sources of Beach replenishment sand by diversified application of geologic and geophysical methods

    USGS Publications Warehouse

    Waldner, J.S.; Hall, D.W.; Uptegrove, J.; Sheridan, R.E.; Ashley, G.M.; Esker, D.

    1999-01-01

    Beach replenishment serves the dual purpose of maintaining a source of tourism and recreation while protecting life and property. For New Jersey, sources for beach sand supply are increasingly found offshore. To meet present and future needs, geologic and geophysical techniques can be used to improve the identification, volume estimation, and determination of suitability, thereby making the mining and managing of this resource more effective. Current research has improved both data collection and interpretation of seismic surveys and vibracore analysis for projects investigating sand ridges offshore of New Jersey. The New Jersey Geological Survey in cooperation with Rutgers University is evaluating the capabilities of digital seismic data (in addition to analog data) to analyze sand ridges. The printing density of analog systems limits the dynamic range to about 24 dB. Digital acquisition systems with dynamic ranges above 100 dB can permit enhanced seismic profiles by trace static correction, deconvolution, automatic gain scaling, horizontal stacking and digital filtering. Problems common to analog data, such as wave-motion effects of surface sources, water-bottom reverberation, and bubble-pulse-width can be addressed by processing. More than 160 line miles of digital high-resolution continuous profiling seismic data have been collected at sand ridges off Avalon, Beach Haven, and Barnegat Inlet. Digital multichannel data collection has recently been employed to map sand resources within the Port of New York/New Jersey expanded dredge-spoil site located 3 mi offshore of Sandy Hook, New Jersey. Multichannel data processing can reduce multiples, improve signal-to-noise calculations, enable source deconvolution, and generate sediment acoustic velocities and acoustic impedance analysis. 
Synthetic seismograms based on empirical relationships among grain size distribution, density, and velocity from vibracores are used to calculate proxy values for density and velocity. The seismograms are then correlated to the digital seismic profile to confirm reflected events. They are particularly useful where individual reflection events cannot be detected but a waveform generated by several thin lithologic units can be recognized. Progress in application of geologic and geophysical methods provides advantages in detailed sediment analysis and volumetric estimation of offshore sand ridges. New techniques for current and ongoing beach replenishment projects not only expand our knowledge of the geologic processes involved in sand ridge origin and development, but also improve our assessment of these valuable resources. These reconnaissance studies provide extensive data to the engineer regarding the suitability and quantity of sand and can optimize placement and analysis of vibracore samples.

  19. Figure Facts: Encouraging Undergraduates to Take a Data-Centered Approach to Reading Primary Literature

    PubMed Central

    Round, Jennifer E.; Campbell, A. Malcolm

    2013-01-01

    The ability to interpret experimental data is essential to understanding and participating in the process of scientific discovery. Reading primary research articles can be a frustrating experience for undergraduate biology students because they have very little experience interpreting data. To enhance their data interpretation skills, students used a template called “Figure Facts” to assist them with primary literature–based reading assignments in an advanced cellular neuroscience course. The Figure Facts template encourages students to adopt a data-centric approach, rather than a text-based approach, to understand research articles. Specifically, Figure Facts requires students to focus on the experimental data presented in each figure and identify specific conclusions that may be drawn from those results. Students who used Figure Facts for one semester increased the amount of time they spent examining figures in a primary research article, and regular exposure to primary literature was associated with improved student performance on a data interpretation skills test. Students reported decreased frustration associated with interpreting data figures, and their opinions of the Figure Facts template were overwhelmingly positive. In this paper, we present Figure Facts for others to adopt and adapt, with reflection on its implementation and effectiveness in improving undergraduate science education. PMID:23463227

  20. Supporting Fourth Graders' Ability to Interpret Graphs through Real-Time Graphing Technology: A Preliminary Study

    ERIC Educational Resources Information Center

    Deniz, Hasan; Dulger, Mehmet F.

    2012-01-01

    This study examined to what extent inquiry-based instruction supported with real-time graphing technology improves fourth graders' ability to interpret graphs as representations of physical science concepts such as motion and temperature. This study also examined whether there is any difference between inquiry-based instruction supported with…
