Slotnick, Scott D
2017-07-01
Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
Why We (Usually) Don't Have to Worry about Multiple Comparisons
ERIC Educational Resources Information Center
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao
2012-01-01
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power is weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial, but also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
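Editor's note: for readers unfamiliar with how sample size feeds into the positive predictive value quoted above, the relation commonly used is PPV = power × R / (power × R + α), where R is the prior odds that a tested effect is real. The Python sketch below is a generic illustration of that relation with an assumed R; it is not the estimator used in the study.

```python
# Illustrative only: the textbook relation between PPV, statistical power,
# the significance threshold alpha, and the prior odds R that a tested
# effect is real. Not the estimator used in the study above; R is assumed.

def ppv(power: float, alpha: float = 0.05, prior_odds: float = 0.25) -> float:
    """PPV = power*R / (power*R + alpha), with R the prior odds of a true effect."""
    return power * prior_odds / (power * prior_odds + alpha)

if __name__ == "__main__":
    for power in (0.02, 0.20, 0.80):   # e.g., sensitivity < 2% reported for small samples
        print(f"power={power:.2f} -> PPV={ppv(power):.2f}")
```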
Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms
ERIC Educational Resources Information Center
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.
2011-01-01
To compare different scoring algorithms for Pick-N multiple-correct-answer multiple-choice (MC) exams with regard to test reliability, student performance, total item discrimination and item difficulty. Data from six end-of-term exams in internal medicine taken by 3rd-year medical students at Munich University from 2005 to 2008 were analysed (1,255 students,…
Interactive comparison and remediation of collections of macromolecular structures.
Moriarty, Nigel W; Liebschner, Dorothee; Klei, Herbert E; Echols, Nathaniel; Afonine, Pavel V; Headd, Jeffrey J; Poon, Billy K; Adams, Paul D
2018-01-01
Often similar structures need to be compared to reveal local differences throughout the entire model or between related copies within the model. Therefore, a program to compare multiple structures and enable correction of any differences not supported by the density map was written within the Phenix framework (Adams et al., Acta Cryst 2010; D66:213-221). This program, called Structure Comparison, can also be used for structures with multiple copies of the same protein chain in the asymmetric unit, that is, as a result of non-crystallographic symmetry (NCS). Structure Comparison was designed to interface with Coot (Emsley et al., Acta Cryst 2010; D66:486-501) and PyMOL (DeLano, PyMOL 0.99; 2002) to facilitate comparison of large numbers of related structures. Structure Comparison analyzes collections of protein structures using several metrics, such as the rotamer conformation of equivalent residues, displays the results in tabular form and allows superimposed protein chains and density maps to be quickly inspected and edited (via the tools in Coot) for consistency, completeness and correctness. © 2017 The Protein Society.
Atypical nucleus accumbens morphology in psychopathy: another limbic piece in the puzzle.
Boccardi, Marina; Bocchetta, Martina; Aronen, Hannu J; Repo-Tiihonen, Eila; Vaurio, Olli; Thompson, Paul M; Tiihonen, Jari; Frisoni, Giovanni B
2013-01-01
Psychopathy has been associated with increased putamen and striatum volumes. The nucleus accumbens - a key structure in reversal learning, less effective in psychopathy - has not yet received specific attention. Moreover, basal ganglia morphology has never been explored. We examined the morphology of the caudate, putamen and accumbens, manually segmented from magnetic resonance images of 26 offenders (age: 32.5 ± 8.4) with medium-high psychopathy (mean PCL-R=30 ± 5) and 25 healthy controls (age: 34.6 ± 10.8). Local differences were statistically modeled using a surface-based radial distance mapping method (p<0.05; multiple comparisons correction through permutation tests). In psychopathy, the caudate and putamen had normal global volume, but different morphology, significant after correction for multiple comparisons, for the right dorsal putamen (permutation test: p=0.02). The volume of the nucleus accumbens was 13% smaller in psychopathy (p corrected for multiple comparisons <0.006). The atypical morphology consisted of predominant anterior hypotrophy bilaterally (10-30%). Caudate and putamen local morphology displayed negative correlation with the lifestyle factor of the PCL-R (permutation test: p=0.05 and 0.03). From these data, psychopathy appears to be associated with an atypical striatal morphology, with highly significant global and local differences of the accumbens. This is consistent with the clinical syndrome and with theories of limbic involvement. Copyright © 2013 Elsevier Ltd. All rights reserved.
Effect of Air Pollution on Exacerbations of Bronchiectasis in Badalona, Spain, 2008-2016.
Garcia-Olivé, Ignasi; Stojanovic, Zoran; Radua, Joaquim; Rodriguez-Pons, Laura; Martinez-Rivera, Carlos; Ruiz Manzano, Juan
2018-05-17
Air pollution has been widely associated with respiratory diseases. Nevertheless, the association between air pollution and exacerbations of bronchiectasis has been less studied. To analyze the effect of air pollution on exacerbations of bronchiectasis. This was a retrospective observational study conducted in Badalona. The number of daily hospital admissions and emergency room visits related to exacerbation of bronchiectasis (ICD-9 code 494.1) between 2008 and 2016 was obtained. We used simple Poisson regressions to test the effects of daily mean temperature, SO2, NO2, CO, and PM10 levels on bronchiectasis-related emergencies and hospitalizations on the same day and 1-4 days after. All p values were corrected for multiple comparisons. SO2 was significantly associated with an increase in the number of hospitalizations (lags 0, 1, 2, and 3). None of these associations remained significant after correcting for multiple comparisons. The number of emergency room visits was associated with higher levels of SO2 (lags 0-4). After correcting for multiple comparisons, the association between emergency room visits and SO2 levels was statistically significant for lag 0 (p = 0.043), lag 1 (p = 0.018), and lag 3 (p = 0.050). The number of emergency room visits for exacerbation of bronchiectasis is associated with higher levels of SO2. © 2018 S. Karger AG, Basel.
Alonso, Joan Francesc; Romero, Sergio; Mañanas, Miguel Ángel; Rojas, Mónica; Riba, Jordi; Barbanoj, Manel José
2015-10-01
Identifying the brain regions involved in neuropharmacological action is a potentially useful procedure for drug development. These regions are commonly determined from the voxels showing statistically significant differences when placebo-induced effects are compared with drug-elicited effects. LORETA is an electroencephalography (EEG) source imaging technique frequently used to identify brain structures affected by a drug. The aim of the present study was to evaluate different methods for the correction of multiple comparisons in LORETA maps. These methods, which have been commonly used in neuroimaging and in simulation studies, were applied to a real pharmaco-EEG study in which the effects of increasing benzodiazepine doses on the central nervous system, as measured by LORETA, were investigated. Data consisted of EEG recordings obtained from nine volunteers who received single oral doses of alprazolam 0.25, 0.5, and 1 mg, and placebo in a randomized crossover double-blind design. The identification of active regions was highly dependent on the selected multiple-test correction procedure. The combined-criteria approach known as cluster mass was useful in revealing that increasing drug doses led to greater intensity and spread of the pharmacologically induced changes in intracerebral current density.
Benseñor, Isabela M; Nunes, Maria Angélica; Sander Diniz, Maria de Fátima; Santos, Itamar S; Brunoni, André R; Lotufo, Paulo A
2016-02-01
To evaluate the association between subclinical thyroid dysfunction and psychiatric disorders using baseline data from the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). Cross-sectional study. The study included 12 437 participants from the ELSA-Brasil with normal thyroid function (92·8%), 193 (1·4%) with subclinical hyperthyroidism and 784 (5·8%) with subclinical hypothyroidism, totalling 13 414 participants (50·6% women). The mental health diagnoses of participants were assessed by trained raters using the Clinical Interview Schedule - Revised (CIS-R) and grouped according to the International Classification of Diseases 10 (ICD-10). Thyroid dysfunction was assessed using TSH and FT4 as well as routine use of thyroid hormones or antithyroid medications. Logistic models were presented using psychiatric disorders as the dependent variable and subclinical thyroid disorders as the independent variable. All logistic models were corrected for multiple comparisons using Bonferroni correction. After multivariate adjustment for possible confounders, we found a direct association between subclinical hyperthyroidism and panic disorder (odds ratio [OR], 2·55; 95% confidence interval [95% CI], 1·09-5·94) and an inverse association between subclinical hypothyroidism and generalized anxiety disorder (OR, 0·75; 95% CI, 0·59-0·96). However, both lost significance after correction for multiple comparisons. Subclinical hyperthyroidism was positively associated with panic disorder and subclinical hypothyroidism was negatively associated with generalized anxiety disorder, although neither association remained significant after adjustment for multiple comparisons. © 2015 John Wiley & Sons Ltd.
Pietrosemoli, Natalia; Mella, Sébastien; Yennek, Siham; Baghdadi, Meryem B; Sakai, Hiroshi; Sambasivan, Ramkumar; Pala, Francesca; Di Girolamo, Daniela; Tajbakhsh, Shahragim
2018-06-06
After publication of this article [1], the authors noted that the legends for supplementary files Figures S3 and S4 were truncated in the production process, therefore lacking some information concerning these Figures. The complete legends are included in this Correction. The authors apologize for any inconvenience that this might have caused.
Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.
Groppe, David M; Urbach, Thomas P; Kutas, Marta
2011-12-01
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
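Editor's note: as a concrete illustration of the first of the four methods listed above, strong FWER control via permutation tests is commonly implemented with the maximum-statistic (tmax) distribution. The sketch below is a generic sign-flipping version for a one-sample or paired contrast; data shapes and parameters are assumptions, and this is not the authors' MATLAB toolbox.

```python
# Minimal sketch of strong FWER control via a sign-flipping permutation test
# with the max-statistic, as commonly used for one-sample/paired ERP contrasts.
# Generic illustration only, not the Mass Univariate ERP Toolbox itself.
import numpy as np

def tmax_permutation(data, n_perm=2000, alpha=0.05, seed=0):
    """data: (n_subjects, n_tests) array of difference scores; returns a
    boolean mask of tests that are significant under FWER control."""
    rng = np.random.default_rng(seed)
    n, m = data.shape

    def tvals(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    t_obs = tvals(data)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))   # exchangeable under H0
        max_null[i] = np.abs(tvals(data * signs)).max()
    thresh = np.quantile(max_null, 1 - alpha)
    return np.abs(t_obs) > thresh

# Example: 20 subjects, 5000 channel-by-time tests, a true effect in the first 50
rng = np.random.default_rng(1)
x = rng.normal(0, 1, (20, 5000))
x[:, :50] += 1.0
print(tmax_permutation(x).sum(), "tests significant at FWER 0.05")
```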
Ward, John; Sorrels, Ken; Coats, Jesse; Pourmoghaddam, Amir; Deleon, Carlos; Daigneault, Paige
2014-03-01
The purpose of this study was to pilot test our study procedures and estimate parameters for sample size calculations for a randomized controlled trial to determine if bilateral sacroiliac (SI) joint manipulation affects specific gait parameters in asymptomatic individuals with a leg length inequality (LLI). Twenty-one asymptomatic chiropractic students engaged in a baseline 90-second walking kinematic analysis using infrared Vicon® cameras. Following this, participants underwent a functional LLI test. Upon examination participants were classified as: left short leg, right short leg, or no short leg. Half of the participants in each short leg group were then randomized to receive bilateral corrective SI joint chiropractic manipulative therapy (CMT). All participants then underwent another 90-second gait analysis. Pre- versus post-intervention gait data were then analyzed within treatment groups by an individual who was blinded to participant group status. For the primary analysis, all p-values were corrected for multiple comparisons using the Bonferroni method. Within groups, no differences in measured gait parameters were statistically significant after correcting for multiple comparisons. The protocol of this study was acceptable to all subjects who were invited to participate. No participants refused randomization. Based on the data collected, we estimated that a larger main study would require 34 participants in each comparison group to detect a moderate effect size.
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
Implementation of false discovery rate for exploring novel paradigms and trait dimensions with ERPs.
Crowley, Michael J; Wu, Jia; McCreary, Scott; Miller, Kelly; Mayes, Linda C
2012-01-01
False discovery rate (FDR) control is a multiple comparison procedure that limits the expected proportion of false discoveries among all discoveries. Employing FDR methods in event-related potential (ERP) research provides an approach to explore new ERP paradigms and ERP-psychological trait/behavior relations. In Study 1, we examined neural responses to escape behavior from an aversive noise. In Study 2, we correlated a relatively unexplored trait dimension, ostracism, with neural response. In both situations we focused on the frontal cortical region, applying channel-by-time plots to display statistically significant uncorrected data and FDR-corrected data, controlling for multiple comparisons.
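Editor's note: the Benjamini-Hochberg step-up procedure is the most common way to control the FDR described above; the abstract does not specify which FDR variant was used, so the sketch below should be read as a generic illustration rather than the authors' implementation.

```python
# Sketch of the Benjamini-Hochberg step-up FDR procedure as it is typically
# applied to a vector of uncorrected ERP p-values (one per channel x time point).
# The exact FDR variant used by the authors is not specified in the abstract.
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Return a boolean mask of discoveries at FDR level q (Benjamini-Hochberg)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= (np.arange(1, m + 1) / m) * q
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()     # largest rank i with p_(i) <= (i/m) q
        mask[order[:k + 1]] = True         # reject all hypotheses up to rank k
    return mask

print(fdr_bh([0.001, 0.008, 0.039, 0.041, 0.27, 0.60]))
```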
Published GMO studies find no evidence of harm when corrected for multiple comparisons.
Panchin, Alexander Y; Tuzhikov, Alexander I
2017-03-01
A number of widely debated research articles claiming possible technology-related health concerns have influenced public opinion on genetically modified food safety. We performed a statistical reanalysis and review of experimental data presented in some of these studies and found that, quite often and in contradiction with the authors' conclusions, the data actually provide weak evidence of harm that cannot be differentiated from chance. In our opinion, the problem of statistically unaccounted-for multiple comparisons has led to some of the most cited anti-genetically modified organism health claims in history. We hope this analysis puts the original results of these studies into proper context.
Non-parametric combination and related permutation tests for neuroimaging.
Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E
2016-04-01
In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, including non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
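Editor's note: a minimal sketch of the core idea above, Tippett (minimum-p) combination with synchronized permutations across two modalities, is given below for a one-sample contrast. It is a simplified illustration under assumed data shapes, not the PALM/NPC implementation evaluated in the paper.

```python
# Simplified sketch of non-parametric combination (NPC) with Tippett's
# (minimum-p) combining function and synchronized sign-flips, for a
# one-sample contrast tested jointly in two modalities. Illustrative only.
import numpy as np

def npc_tippett(y1, y2, n_perm=1000, seed=0):
    """y1, y2: (n_subjects,) difference scores from two modalities.
    Returns the NPC (Tippett) p-value for the joint null hypothesis."""
    rng = np.random.default_rng(seed)
    n = y1.size

    def tstat(y):
        return y.mean() / (y.std(ddof=1) / np.sqrt(n))

    # Synchronized permutations: the same sign-flips are applied to both modalities.
    signs = np.vstack([np.ones(n)] + [rng.choice([-1, 1], n) for _ in range(n_perm - 1)])
    t1 = np.array([tstat(s * y1) for s in signs])
    t2 = np.array([tstat(s * y2) for s in signs])

    # Permutation p-value of each partial test, evaluated for every permutation.
    p1 = np.array([(t1 >= t).mean() for t in t1])
    p2 = np.array([(t2 >= t).mean() for t in t2])

    combined = np.minimum(p1, p2)              # Tippett: smaller p = stronger joint evidence
    return (combined <= combined[0]).mean()    # element 0 is the unpermuted data

rng = np.random.default_rng(2)
print(npc_tippett(rng.normal(0.5, 1, 20), rng.normal(0.3, 1, 20)))
```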
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
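Editor's note: the abstract does not describe the three new methods themselves, so the sketch below only illustrates the kind of artifact they target: a multiplicative row/column bias becomes additive after a log transform and can then be removed with a simple median polish. This is an assumed, generic approach, not the AssayCorrector algorithm.

```python
# Generic illustration of removing multiplicative row/column bias from an
# HTS plate by median polish in log space (multiplicative bias becomes
# additive after the log transform). NOT the method of the paper above;
# it only illustrates the kind of spatial bias such methods target.
import numpy as np

def remove_multiplicative_bias(plate, n_iter=10):
    """plate: 2-D array of positive raw measurements; returns a corrected plate."""
    log_p = np.log(plate)
    residual = log_p.copy()
    for _ in range(n_iter):                                      # simple median polish
        residual -= np.median(residual, axis=1, keepdims=True)   # row effects
        residual -= np.median(residual, axis=0, keepdims=True)   # column effects
    bias = log_p - residual                                      # estimated additive (log) bias
    bias -= np.median(bias)                                      # keep the overall plate level
    return plate / np.exp(bias)

rng = np.random.default_rng(0)
true = rng.lognormal(0, 0.1, (16, 24))
row_bias = np.linspace(0.7, 1.3, 16)[:, None]                    # simulated multiplicative gradient
observed = true * row_bias
print(np.abs(remove_multiplicative_bias(observed) - true).mean())
```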
A Comparison of Three Tests of Mediation
ERIC Educational Resources Information Center
Warbasse, Rosalia E.
2009-01-01
A simulation study was conducted to evaluate the performance of three tests of mediation: the bias-corrected and accelerated bootstrap (Efron & Tibshirani, 1993), the asymmetric confidence limits test (MacKinnon, 2008), and a multiple regression approach described by Kenny, Kashy, and Bolger (1998). The evolution of these methods is reviewed and…
NASA Astrophysics Data System (ADS)
Gold, A. U.; Harris, S. E.
2013-12-01
The greenhouse effect comes up in most discussions about climate and is a key concept related to climate change. Existing studies have shown that students and adults alike lack a detailed understanding of this important concept or might hold misconceptions. We studied the effectiveness of different interventions on university-level students' understanding of the greenhouse effect. Introductory-level science students were tested for their prior knowledge of the greenhouse effect using validated multiple-choice questions, short answers and concept sketches. All students participated in a common lesson about the greenhouse effect and were then randomly assigned to one of two lab groups. One group explored an existing simulation about the greenhouse effect (PhET lesson) and the other group worked with absorption spectra of different greenhouse gases (Data lesson) to deepen their understanding of the greenhouse effect. All students completed the same assessment, including multiple choice, short answers and concept sketches, after participation in their lab lesson. In total, 164 students completed all the assessments: 76 completed the PhET lesson, 77 completed the Data lesson, and 11 students missed the contrasting lesson. In this presentation we show the comparison between the multiple-choice questions, short-answer questions and the concept sketches of students. We explore how well each of these assessment types represents students' knowledge. We also identify items that are indicators of the level of understanding of the greenhouse effect, as measured by the correspondence of student answers to an expert mental model and expert responses. Preliminary data analysis shows that students who produce concept sketch drawings that come close to expert drawings also choose correct multiple-choice answers. However, correct multiple-choice answers are not necessarily an indicator that a student will produce expert-like concept sketch items. Multiple-choice questions that require detailed knowledge of the greenhouse effect (e.g., direction of re-emission of infrared energy by greenhouse gases) are significantly more likely to be answered correctly by students who also produce expert-like concept sketch items than by students who neither include this aspect in their sketch nor answer the multiple-choice questions correctly. This difference is not as apparent for less technical multiple-choice questions (e.g., type of radiation emitted by the Sun). Our findings explore the formation of students' mental models through the different interventions and how well the different assessment techniques used in this study represent students' understanding of the overall concept.
Voxelwise multivariate analysis of multimodality magnetic resonance imaging
Naylor, Melissa G.; Cardenas, Valerie A.; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2015-01-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome, and then adjusting for multiple comparisons among the outcomes, and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remain a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. PMID:23408378
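Editor's note: to make the comparison above concrete, the sketch below reduces it to a two-group, two-outcome special case: separate univariate t-tests with Bonferroni correction over the outcomes versus a single Hotelling T-squared test that uses the estimated covariance between outcomes. The voxelwise regression models of the study are more general; all data here are simulated.

```python
# Per-voxel sketch of the comparison described above, reduced to a two-group,
# two-outcome special case: (1) one univariate t-test per outcome with
# Bonferroni correction over outcomes, versus (2) a single multivariate
# Hotelling T^2 test that uses the estimated covariance between outcomes.
# Simplified illustration only; the study fits voxelwise regression models.
import numpy as np
from scipy import stats

def univariate_bonferroni(g1, g2, alpha=0.05):
    """g1, g2: (n, k) outcome matrices for the two groups."""
    pvals = np.array([stats.ttest_ind(g1[:, j], g2[:, j]).pvalue
                      for j in range(g1.shape[1])])
    return pvals, (pvals < alpha / g1.shape[1]).any()

def hotelling_t2(g1, g2, alpha=0.05):
    n1, k = g1.shape
    n2 = g2.shape[0]
    d = g1.mean(0) - g2.mean(0)
    s_pooled = ((n1 - 1) * np.cov(g1, rowvar=False) +
                (n2 - 1) * np.cov(g2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f = t2 * (n1 + n2 - k - 1) / (k * (n1 + n2 - 2))       # convert T^2 to an F statistic
    p = stats.f.sf(f, k, n1 + n2 - k - 1)
    return p, p < alpha

rng = np.random.default_rng(3)
cov = [[1.0, 0.6], [0.6, 1.0]]                             # correlated outcomes
g1 = rng.multivariate_normal([0.4, 0.4], cov, size=30)     # e.g., patients
g2 = rng.multivariate_normal([0.0, 0.0], cov, size=30)     # e.g., controls
print(univariate_bonferroni(g1, g2), hotelling_t2(g1, g2))
```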
Hubbard, Joanna K.; Potts, Macy A.; Couch, Brian A.
2017-01-01
Assessments represent an important component of undergraduate courses because they affect how students interact with course content and gauge student achievement of course objectives. To make decisions on assessment design, instructors must understand the affordances and limitations of available question formats. Here, we use a crossover experimental design to identify differences in how multiple-true-false (MTF) and free-response (FR) exam questions reveal student thinking regarding specific conceptions. We report that correct response rates correlate across the two formats but that a higher percentage of students provide correct responses for MTF questions. We find that MTF questions reveal a high prevalence of students with mixed (correct and incorrect) conceptions, while FR questions reveal a high prevalence of students with partial (correct and unclear) conceptions. These results suggest that MTF question prompts can direct students to address specific conceptions but obscure nuances in student thinking and may overestimate the frequency of particular conceptions. Conversely, FR questions provide a more authentic portrait of student thinking but may face limitations in their ability to diagnose specific, particularly incorrect, conceptions. We further discuss an intrinsic tension between question structure and diagnostic capacity and how instructors might use multiple formats or hybrid formats to overcome these obstacles. PMID:28450446
Spaceborne lidar for cloud monitoring
NASA Astrophysics Data System (ADS)
Werner, Christian; Krichbaumer, W.; Matvienko, Gennadii G.
1994-12-01
Results of laser cloud-top measurements taken from space in 1982 (called PANTHER) are presented. Three sequences of land, water, and cloud data are selected. A comparison with airborne lidar data shows similarities. If the single-scattering lidar equation is applied to these spaceborne lidar measurements, the data can be misinterpreted unless multiple scattering is corrected for.
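Editor's note: for reference, the single-scattering (elastic) lidar equation referred to above has the standard form shown below; the notation is the conventional one and is not taken from the paper itself.

```latex
% Standard single-scattering elastic lidar equation (common notation, not from the paper):
% P(R): received power from range R, K: system constant, O(R): overlap function,
% beta(R): backscatter coefficient, alpha(r): extinction coefficient.
\[
  P(R) \;=\; K \, \frac{O(R)}{R^{2}} \, \beta(R)
  \exp\!\left[-2\int_{0}^{R} \alpha(r)\, \mathrm{d}r\right]
\]
% Multiple scattering adds an extra contribution to P(R), so applying this
% single-scattering form to optically thick clouds biases the retrieval.
```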
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
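Editor's note: a minimal sketch of the multiplicative-plus-additive shading model described above is given below: the additive term is suppressed by denoising, and the multiplicative field is approximated by a heavy blur and divided out. This generic two-step version is an assumption for illustration, not the authors' segmentation-coupled, iterative estimator.

```python
# Minimal sketch of retrospective shading correction under the model
#   observed = multiplicative_shading * true_image + additive_noise.
# The additive term is suppressed by denoising and the multiplicative field is
# approximated by a heavy Gaussian blur of the denoised image. This generic
# two-step sketch is NOT the authors' segmentation-coupled, iterative estimator.
import numpy as np
from scipy import ndimage

def correct_shading(image, sigma_denoise=1.0, sigma_field=50.0):
    denoised = ndimage.gaussian_filter(image, sigma_denoise)   # suppress additive noise
    field = ndimage.gaussian_filter(denoised, sigma_field)     # smooth multiplicative field
    field /= field.mean()                                      # preserve overall intensity
    return denoised / np.maximum(field, 1e-6)

# Toy example: flat background with bright "cells" under a left-to-right shading ramp
rng = np.random.default_rng(4)
truth = np.full((256, 256), 10.0)
truth[100:120, 100:120] = 50.0
shading = np.linspace(0.5, 1.5, 256)[None, :]
observed = truth * shading + rng.normal(0, 0.5, truth.shape)
print(np.corrcoef(correct_shading(observed).ravel(), truth.ravel())[0, 1])
```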
Williams, G E; Cuvo, A J
1986-01-01
The research was designed to validate procedures to teach apartment upkeep skills to severely handicapped clients with various categorical disabilities. Methodological features of this research included performance comparisons between general and specific task analyses, effect of an impasse correction baseline procedure, social validation of training goals, natural environment assessments and contingencies, as well as long-term follow-up. Subjects were taught to perform upkeep responses on their air conditioner-heating unit, electric range, refrigerator, and electrical appliances within the context of a multiple-probe across subjects experimental design. The results showed acquisition, long-term maintenance, and generalization of the upkeep skills to a nontraining apartment. General task analyses were recommended for assessment and specific task analyses for training. The impasse correction procedure generally did not produce acquisition. PMID:3710947
Limb Correction of Individual Infrared Channels Used in RGB Composite Products
NASA Technical Reports Server (NTRS)
Elmer, Nicholas J.; Berndt, Emily; Jedlovec, Gary J.; Lafontaine, Frank J.
2015-01-01
This study demonstrates that limb-cooling can be removed from infrared imagery using latitudinally and seasonally dependent limb correction coefficients, which account for an increasing optical path length as scan angle increases. Furthermore, limb-corrected RGB composites provide multiple advantages over uncorrected RGB composites, including increased confidence in the interpretation of RGB features, improved situation awareness for operational forecasters, seamless transition between overlaid RGB composites, easy comparison of RGB products from different sensors, and the availability of high quality proxy products for the GOES-R era, as demonstrated by the case examples presented in Section 3. This limb correction methodology can also be applied to additional infrared channels used to create other RGB products, including those created from other satellite sensors, such as Suomi NPP Visible Infrared Imaging Radiometer Suite (VIIRS).
NASA Astrophysics Data System (ADS)
Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.
2018-03-01
In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with time-dependent variable coefficients, using the simplified bilinear method based on a transformation combined with Hirota's bilinear sense. In addition, we present an analysis of some parameters such as the soliton amplitude and the characteristic line. Several equations in the literature are special cases of the class we discuss, such as the Caudrey-Dodd-Gibbon equation and the Sawada-Kotera equation. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed method can solve many physical problems.
Kaku, Yoshio; Ookawara, Susumu; Miyazawa, Haruhisa; Ito, Kiyonori; Ueda, Yuichirou; Hirai, Keiji; Hoshino, Taro; Mori, Honami; Yoshida, Izumi; Morishita, Yoshiyuki; Tabei, Kaoru
2016-02-01
The following conventional calcium correction formula (Payne) is broadly applied for serum calcium estimation: corrected total calcium (TCa) (mg/dL) = TCa (mg/dL) + (4 - albumin (g/dL)); however, it is inapplicable to chronic kidney disease (CKD) patients. A total of 2503 venous samples were collected from 942 all-stage CKD patients, and levels of TCa (mg/dL), ionized calcium ([iCa(2+)] mmol/L), phosphate (mg/dL), albumin (g/dL), and pH, and other clinical parameters were measured. We assumed corrected TCa (the gold standard) to be equal to eight times the iCa(2+) value (measured corrected TCa). Then, we performed stepwise multiple linear regression analysis by using the clinical parameters and derived a simple formula for corrected TCa approximation. The following formula was devised from multiple linear regression analysis: Approximated corrected TCa (mg/dL) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - phosphate) + 0.3. Receiver operating characteristic curve analysis illustrated that the areas under the curve of approximated corrected TCa for detection of measured corrected TCa ≥ 8.4 mg/dL and ≤ 10.4 mg/dL were 0.994 and 0.919, respectively. The intraclass correlation coefficient demonstrated superior agreement using this new formula compared to other formulas (new formula: 0.826, Payne: 0.537, Jain: 0.312, Portale: 0.582, Ferrari: 0.362). In CKD patients, TCa correction should include not only albumin but also pH and phosphate. The approximated corrected TCa from this formula demonstrates superior agreement with the measured corrected TCa in comparison to other formulas. © 2016 International Society for Apheresis, Japanese Society for Apheresis, and Japanese Society for Dialysis Therapy.
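Editor's note: the two correction formulas quoted in the abstract translate directly into code; the sketch below implements them with the units stated there (function and variable names are illustrative, not from the paper).

```python
# Direct implementation of the two correction formulas quoted in the abstract
# (units as stated there: TCa and phosphate in mg/dL, albumin in g/dL).
# Function and variable names are illustrative only.

def corrected_ca_payne(tca, albumin):
    """Conventional Payne correction: TCa + (4 - albumin)."""
    return tca + (4.0 - albumin)

def corrected_ca_ckd(tca, albumin, ph, phosphate):
    """Approximation proposed for CKD patients:
    TCa + 0.25*(4 - albumin) + 4*(7.4 - pH) + 0.1*(6 - phosphate) + 0.3."""
    return tca + 0.25 * (4.0 - albumin) + 4.0 * (7.4 - ph) + 0.1 * (6.0 - phosphate) + 0.3

# Example: hypoalbuminaemic, acidaemic, hyperphosphataemic CKD patient
print(corrected_ca_payne(8.0, 3.0), corrected_ca_ckd(8.0, 3.0, 7.30, 7.0))
```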
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which necessitates the comparison of two CADe systems involving multiple comparisons. To control the Type I error, multiple-comparison correction is needed for keeping the family-wise error rate (FWER) less than a given alpha-level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over other the two methods both in terms of the FWER and power.
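Editor's note: of the three procedures named above, the Bonferroni and standard Hochberg step-up corrections can be written in a few lines, as sketched below; the correlation-adjusted step-up variant evaluated in the paper additionally uses the estimated correlations between sensitivities and is not reproduced here. The p-values in the example are invented for illustration.

```python
# Bonferroni and Hochberg step-up adjustments for the p-values obtained when
# comparing the sensitivities of two CADe systems at several operating points.
# The correlation-adjusted step-up variant studied in the paper is not shown.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    p = np.asarray(pvals)
    return p < alpha / p.size

def hochberg_step_up(pvals, alpha=0.05):
    """Reject H_(i) for all i <= k, where k is the largest rank with
    p_(k) <= alpha / (m - k + 1)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    ok = ranked <= alpha / (m - np.arange(1, m + 1) + 1)
    reject = np.zeros(m, dtype=bool)
    if ok.any():
        k = np.nonzero(ok)[0].max()
        reject[order[:k + 1]] = True
    return reject

# p-values from paired comparisons at, say, four operating points (illustrative)
p = [0.004, 0.020, 0.030, 0.25]
print(bonferroni(p), hochberg_step_up(p))
```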
Toye, Warren; Das, Ram; Kron, Tomas; Franich, Rick; Johnston, Peter; Duchesne, Gillian
2009-05-01
To develop an in vivo dosimetry-based investigative action level relevant for a corrective protocol for HDR brachytherapy boost treatment. The dose delivered to points within the urethra and rectum was measured using TLD in vivo dosimetry in 56 patients. Comparisons between the urethral and rectal measurements and TPS calculations showed differences, which are related to the relative position of the implant and TLD trains, and allowed shifts of implant position relative to the prostate to be estimated. Analysis of rectal dose measurements is consistent with implant movement, which was previously identified only with the urethral data. Shift-corrected doses were compared with results from the TPS. Comparison of peak doses to the urethra and rectum was assessed against the proposed corrective protocol to limit overdosing of these critical structures. An initial investigative level of a 20% difference between measured and TPS peak dose was established, which corresponded to one-third of patients, a proportion that was practical for the caseload. These patients were assessed, resulting in corrective action being applied for one patient. Multiple triggering for selective investigative action is outlined.
Scatter characterization and correction for simultaneous multiple small-animal PET imaging.
Prasad, Rameshwar; Zaidi, Habib
2014-04-01
The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deterioration of image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the largest number of scatter events, while the scatter contribution due to lead shielding was negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different size simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments of single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.
A comparison of quality of present-day heat flow obtained from BHTs, Horner Plots of Malay Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waples, D.W.; Mahadir, R.
1994-07-01
Reconciling temperature data obtained from measurements of single BHTs, multiple BHTs at a single depth, RFTs, and DSTs is very difficult. Quality of the data varied widely; however, DST data were assumed to be the most reliable. Data from 87 wells were used in this study, but only 47 wells have DST data. The BASINMOD program was used to calculate the present-day heat flow, using measured thermal conductivity and calibrated against the DST data. The heat flows obtained from the DST data were assumed to be correct and representative throughout the basin. Then, heat flows using (1) uncorrected RFT data, (2) multiple BHT data corrected by the Horner plot method, and (3) single BHT values corrected upward by a standard 10% were calculated. All three of these heat-flow populations had standard deviations identical to that of the DST data, but with significantly lower mean values. Correction factors were calculated to give each of the three erroneous populations the same mean value as the DST population. Heat flows calculated from RFT data had to be corrected upward by a factor of 1.12 to be equivalent to DST data; Horner plot data had to be corrected by a factor of 1.18, and single BHT data by a factor of 1.2. These results suggest that present-day subsurface temperatures calculated using RFT, Horner plot, and BHT data are considerably lower than they should be. The authors suspect qualitatively similar results would be found in other areas. Hence, they recommend significant corrections be routinely made until local calibration factors are established.
Extrastriatal dopamine D2-receptor availability in social anxiety disorder.
Plavén-Sigray, Pontus; Hedman, Erik; Victorsson, Pauliina; Matheson, Granville J; Forsberg, Anton; Djurfeldt, Diana R; Rück, Christian; Halldin, Christer; Lindefors, Nils; Cervenka, Simon
2017-05-01
Alterations in the dopamine system are hypothesized to influence the expression of social anxiety disorder (SAD) symptoms. However, molecular imaging studies comparing dopamine function between patients and control subjects have yielded conflicting results. Importantly, while all previous investigations focused on the striatum, findings from activation and blood flow studies indicate that prefrontal and limbic brain regions have a central role in the pathophysiology. The objective of this study was to investigate extrastriatal dopamine D2-receptor (D2-R) availability in SAD. We examined 12 SAD patients and 16 healthy controls using positron emission tomography and the high-affinity D2-R radioligand [11C]FLB457. Parametric images of D2-R binding potential were derived using the Logan graphical method with cerebellum as reference region. Two-tailed one-way independent ANCOVAs, with age as covariate, were used to examine differences in D2-R availability between groups using both region-based and voxel-wise analyses. The region-based analysis showed a medium effect size of higher D2-R levels in the orbitofrontal cortex (OFC) in patients, although this result did not remain significant after correction for multiple comparisons. The voxel-wise comparison revealed elevated D2-R availability in patients within OFC and right dorsolateral prefrontal cortex after correction for multiple comparisons. These preliminary results suggest that an aberrant extrastriatal dopamine system may be part of the disease mechanism in SAD. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is the evaluation and development of atmospheric correction algorithms and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has gone into (a) understanding and correcting artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I will briefly present and discuss these and some other research activities.
Stress in childhood, adolescence and early adulthood, and cortisol levels in older age.
Harris, Mathew A; Cox, Simon R; Brett, Caroline E; Deary, Ian J; MacLullich, Alasdair M J
2017-03-01
The glucocorticoid hypothesis suggests that overexposure to stress may cause permanent upregulation of cortisol. Stress in youth may therefore influence cortisol levels even in older age. Using data from the 6-Day Sample, we investigated the effects of high stress in childhood, adolescence and early adulthood - as well as individual variables contributing to these measures: parental loss, social deprivation, school and home moves, illness, divorce and job instability - upon cortisol levels at age 77 years. Waking, waking +45 min (peak) and evening salivary cortisol samples were collected from 159 participants, and the 150 who were not using steroid medications were included in this study. After correcting for multiple comparisons, the only significant association was between early-adulthood job instability and later-life peak cortisol levels. After excluding participants with dementia or possible mild cognitive impairment, early-adulthood high stress showed significant associations with lower evening and mean cortisol levels, suggesting downregulation by stress, but these results did not survive correction for multiple comparisons. Overall, our results do not provide strong evidence of a relationship between stress in youth and later-life cortisol levels, but do suggest that some more long-term stressors, such as job instability, may indeed produce lasting upregulation of cortisol, persisting into the mid-to-late seventies.
Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing.
Butler, Andrew C; Roediger, Henry L
2008-04-01
Multiple-choice tests are used frequently in higher education without much consideration of the impact this form of assessment has on learning. Multiple-choice testing enhances retention of the material tested (the testing effect); however, unlike other tests, multiple-choice can also be detrimental because it exposes students to misinformation in the form of lures. The selection of lures can lead students to acquire false knowledge (Roediger & Marsh, 2005). The present research investigated whether feedback could be used to boost the positive effects and reduce the negative effects of multiple-choice testing. Subjects studied passages and then received a multiple-choice test with immediate feedback, delayed feedback, or no feedback. In comparison with the no-feedback condition, both immediate and delayed feedback increased the proportion of correct responses and reduced the proportion of intrusions (i.e., lure responses from the initial multiple-choice test) on a delayed cued recall test. Educators should provide feedback when using multiple-choice tests.
Implementation errors in the GingerALE Software: Description and recommendations.
Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T
2017-01-01
Neuroscience imaging is a burgeoning, highly sophisticated field, the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.
Shimizu, Kazuhiro; Kosaka, Nobuyuki; Fujiwara, Yasuhiro; Matsuda, Tsuyoshi; Yamamoto, Tatsuya; Tsuchida, Tatsuro; Tsuchiyama, Katsuki; Oyama, Nobuyuki; Kimura, Hirohiko
2017-01-10
The importance of arterial transit time (ATT) correction for arterial spin labeling MRI has been well debated in neuroimaging, but it has not been well evaluated in renal imaging. The purpose of this study was to evaluate the feasibility of pulsed continuous arterial spin labeling (pcASL) MRI with multiple post-labeling delay (PLD) acquisition for measuring ATT-corrected renal blood flow (ATC-RBF). A total of 14 volunteers were categorized into younger (n = 8; mean age, 27.0 years) and older groups (n = 6; 64.8 years). Images of pcASL were obtained at three different PLDs (0.5, 1.0, and 1.5 s), and ATC-RBF and ATT were calculated using a single-compartment model. To validate ATC-RBF, a comparative study of effective renal plasma flow (ERPF) measured by 99mTc-MAG3 scintigraphy was performed. ATC-RBF was corrected by kidney volume (ATC-cRBF) for comparison with ERPF. The younger group showed significantly higher ATC-RBF (157.68 ± 38.37 mL/min/100 g) and shorter ATT (961.33 ± 260.87 ms) than the older group (117.42 ± 24.03 mL/min/100 g and 1227.94 ± 226.51 ms, respectively; P < 0.05). A significant correlation was evident between ATC-cRBF and ERPF (P < 0.05, r = 0.47). With suboptimal single-PLD (1.5 s) settings, there was no significant correlation between ERPF and kidney volume-corrected RBF calculated from single-PLD data. Calculation of ATT and ATC-RBF by pcASL with multiple PLDs was feasible in healthy volunteers, and differences in ATT and ATC-RBF were seen between the younger and older groups. Although ATT correction by multiple PLD acquisitions may not always be necessary for RBF quantification in healthy subjects, the effect of ATT should be taken into account in renal ASL-MRI, as debated in brain imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, V; Labby, Z; Culberson, W
Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector’s response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar modulation characteristics. Measurements and simulations will better elucidate potential plan-class specific dosimetry correction factors.
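To illustrate the kind of analysis described above, the following minimal Python sketch runs all pairwise comparisons of a single plan-complexity metric across five plan classes and adjusts the resulting p-values with the Holm procedure. The data, class labels and metric values are entirely hypothetical; this is not the authors' pipeline, only a sketch of a multiple comparison of means with correction for multiple hypothesis testing.

```python
# Pairwise comparison of one plan-complexity metric across plan classes,
# with Holm adjustment for the multiple pairwise tests.
# Hypothetical data; not the authors' actual metrics or pipeline.
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
plan_classes = {
    "brain":     rng.normal(0.30, 0.05, 20),   # e.g., an aperture-based metric
    "lung":      rng.normal(0.35, 0.05, 20),
    "sbrt_lung": rng.normal(0.45, 0.05, 20),
    "prostate":  rng.normal(0.32, 0.05, 20),
    "spine":     rng.normal(0.40, 0.05, 15),
}

pairs = list(combinations(plan_classes, 2))          # 10 pairwise comparisons
pvals = [ttest_ind(plan_classes[a], plan_classes[b], equal_var=False).pvalue
         for a, b in pairs]

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a:>9} vs {b:<9} adjusted p = {p:.3f}  distinct: {r}")
```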
Amen, Daniel G; Hanks, Chris; Prunella, Jill R; Green, Aisa
2007-01-01
The authors explored differences in regional cerebral blood flow in 11 impulsive murderers and 11 healthy comparison subjects using single photon emission computed tomography. The authors assessed subjects at rest and during a computerized go/no-go concentration task. Using statistical parametric mapping software, the authors performed voxel-by-voxel t tests to assess significant differences, making familywise error corrections for multiple comparisons. Murderers were found to have significantly lower relative rCBF during concentration, particularly in areas associated with concentration and impulse control. These results indicate that nonemotionally laden stimuli may result in frontotemporal dysregulation in people predisposed to impulsive violence.
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen
2017-07-27
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
Correcting for multiple-testing in multi-arm trials: is it necessary and is it done?
Wason, James M S; Stecher, Lynne; Mander, Adrian P
2014-09-17
Multi-arm trials enable the evaluation of multiple treatments within a single trial. They provide a way of substantially increasing the efficiency of the clinical development process. However, since multi-arm trials test multiple hypotheses, some regulators require that a statistical correction be made to control the chance of making a type-1 error (false-positive). Several conflicting viewpoints are expressed in the literature regarding the circumstances in which a multiple-testing correction should be used. In this article we discuss these conflicting viewpoints and review the frequency with which correction methods are currently used in practice. We identified all multi-arm clinical trials published in 2012 by four major medical journals. Summary data on several aspects of the trial design were extracted, including whether the trial was exploratory or confirmatory, whether a multiple-testing correction was applied and, if one was used, what type it was. We found that almost half (49%) of published multi-arm trials report using a multiple-testing correction. The percentage that corrected was higher for trials in which the experimental arms included multiple doses or regimens of the same treatments (67%). The percentage that corrected was higher in exploratory than confirmatory trials, although this is explained by a greater proportion of exploratory trials testing multiple doses and regimens of the same treatment. A sizeable proportion of published multi-arm trials do not correct for multiple-testing. Clearer guidance about whether multiple-testing correction is needed for multi-arm trials that test separate treatments against a common control group is required.
Werling, Donna M; Brand, Harrison; An, Joon-Yong; Stone, Matthew R; Zhu, Lingxue; Glessner, Joseph T; Collins, Ryan L; Dong, Shan; Layer, Ryan M; Markenscoff-Papadimitriou, Eirene; Farrell, Andrew; Schwartz, Grace B; Wang, Harold Z; Currall, Benjamin B; Zhao, Xuefang; Dea, Jeanselle; Duhn, Clif; Erdman, Carolyn A; Gilson, Michael C; Yadav, Rachita; Handsaker, Robert E; Kashin, Seva; Klei, Lambertus; Mandell, Jeffrey D; Nowakowski, Tomasz J; Liu, Yuwen; Pochareddy, Sirisha; Smith, Louw; Walker, Michael F; Waterman, Matthew J; He, Xin; Kriegstein, Arnold R; Rubenstein, John L; Sestan, Nenad; McCarroll, Steven A; Neale, Benjamin M; Coon, Hilary; Willsey, A Jeremy; Buxbaum, Joseph D; Daly, Mark J; State, Matthew W; Quinlan, Aaron R; Marth, Gabor T; Roeder, Kathryn; Devlin, Bernie; Talkowski, Michael E; Sanders, Stephan J
2018-05-01
Genomic association studies of common or rare protein-coding variation have established robust statistical approaches to account for multiple testing. Here we present a comparable framework to evaluate rare and de novo noncoding single-nucleotide variants, insertion/deletions, and all classes of structural variation from whole-genome sequencing (WGS). Integrating genomic annotations at the level of nucleotides, genes, and regulatory regions, we define 51,801 annotation categories. Analyses of 519 autism spectrum disorder families did not identify association with any categories after correction for 4,123 effective tests. Without appropriate correction, biologically plausible associations are observed in both cases and controls. Despite excluding previously identified gene-disrupting mutations, coding regions still exhibited the strongest associations. Thus, in autism, the contribution of de novo noncoding variation is probably modest in comparison to that of de novo coding variants. Robust results from future WGS studies will require large cohorts and comprehensive analytical strategies that consider the substantial multiple-testing burden.
NASA Astrophysics Data System (ADS)
Riedel, S.; Gege, P.; Schneider, M.; Pfug, B.; Oppelt, N.
2016-08-01
Atmospheric correction is a critical step and can be a limiting factor in the extraction of aquatic ecosystem parameters from remote sensing data of coastal and lake waters. Atmospheric correction models commonly in use for open ocean water and land surfaces can lead to large errors when applied to hyperspectral images taken from satellite or aircraft. The main problems arise from uncertainties in aerosol parameters and neglecting the adjacency effect, which originates from multiple scattering of upwelling radiance from the surrounding land. To better understand the challenges for developing an atmospheric correction model suitable for lakes, we compare atmospheric parameters derived from Sentinel-2A and airborne hyperspectral data (HySpex) of two Bavarian lakes (Klostersee, Lake Starnberg) with in-situ measurements performed with RAMSES and Ibsen spectrometer systems and a Microtops sun photometer.
Measuring, modeling, and minimizing capacitances in heterojunction bipolar transistors
NASA Astrophysics Data System (ADS)
Anholt, R.; Bozada, C.; Dettmer, R.; Via, D.; Jenkins, T.; Barrette, J.; Ebel, J.; Havasy, C.; Sewell, J.; Quach, T.
1996-07-01
We demonstrate methods to separate junction and pad capacitances from on-wafer S-parameter measurements of HBTs with different areas and layouts. The measured junction capacitances are in good agreement with models, indicating that large-area devices are suitable for monitoring vendor epi-wafer doping. Measuring open HBTs does not give the correct pad capacitances. Finally, a capacitance comparison for a variety of layouts shows that bar-devices consistently give smaller base-collector values than multiple dot HBTs.
Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.
2011-01-01
Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660
Hubbard, Joanna K; Potts, Macy A; Couch, Brian A
2017-01-01
Assessments represent an important component of undergraduate courses because they affect how students interact with course content and gauge student achievement of course objectives. To make decisions on assessment design, instructors must understand the affordances and limitations of available question formats. Here, we use a crossover experimental design to identify differences in how multiple-true-false (MTF) and free-response (FR) exam questions reveal student thinking regarding specific conceptions. We report that correct response rates correlate across the two formats but that a higher percentage of students provide correct responses for MTF questions. We find that MTF questions reveal a high prevalence of students with mixed (correct and incorrect) conceptions, while FR questions reveal a high prevalence of students with partial (correct and unclear) conceptions. These results suggest that MTF question prompts can direct students to address specific conceptions but obscure nuances in student thinking and may overestimate the frequency of particular conceptions. Conversely, FR questions provide a more authentic portrait of student thinking but may face limitations in their ability to diagnose specific, particularly incorrect, conceptions. We further discuss an intrinsic tension between question structure and diagnostic capacity and how instructors might use multiple formats or hybrid formats to overcome these obstacles. © 2017 J. K. Hubbard et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.
Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas
2018-02-01
There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which is however contradictory to accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band specific functional networks measured from magnetoencephalography (MEG) are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were sensitive to identify network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results in a similar manner as for single layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons. Copyright © 2017 Elsevier Inc. All rights reserved.
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis testings using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
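For contrast with the parametric random-field approach described above, the non-parametric baseline it bypasses, a max-statistic permutation test that controls the family-wise error rate across many simultaneous tests, can be sketched as follows. The toy "connexel" data, group sizes and effect sizes are invented for illustration; this is not the BWAS software.

```python
# Max-statistic permutation test: a generic way to control the FWER across many
# simultaneous tests (here, toy "connexel" two-group differences).
# This is the non-parametric baseline the abstract's parametric method avoids,
# not the BWAS implementation itself.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_connexels = 20, 5000
group_a = rng.normal(0.0, 1.0, (n_per_group, n_connexels))
group_b = rng.normal(0.0, 1.0, (n_per_group, n_connexels))
group_b[:, :10] += 1.5                      # a few true effects

def tstats(a, b):
    """Welch t statistic for every connexel at once."""
    va, vb = a.var(0, ddof=1), b.var(0, ddof=1)
    return (a.mean(0) - b.mean(0)) / np.sqrt(va / len(a) + vb / len(b))

t_obs = tstats(group_a, group_b)

# Build the permutation distribution of the maximum |t| over all connexels.
data = np.vstack([group_a, group_b])
max_null = []
for _ in range(1000):
    perm = rng.permutation(len(data))
    max_null.append(np.abs(tstats(data[perm[:n_per_group]],
                                  data[perm[n_per_group:]])).max())

threshold = np.quantile(max_null, 0.95)      # 5% FWER-controlling threshold
print("5% FWER threshold on |t|:", round(threshold, 2))
print("connexels surviving:", int((np.abs(t_obs) > threshold).sum()))
```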
Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R
2017-11-10
In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography - mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based on either quality control (QC) samples analyzed throughout the batches or using QC-sample independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards. QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.
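A minimal sketch of QC-sample-based drift correction of the kind described above: fit a smooth curve to the quality-control injections across run order, then rescale every injection by the fitted drift. The injection schedule, drift shape and intensities are hypothetical, and this is not the MetaboDrift implementation.

```python
# QC-sample-based intensity-drift correction for one metabolite feature.
# Simplified sketch with synthetic data; not the MetaboDrift tool.
import numpy as np

rng = np.random.default_rng(2)
order = np.arange(100)                          # injection order
is_qc = order % 10 == 0                         # every 10th injection is a pooled QC

true = rng.lognormal(mean=2.0, sigma=0.3, size=100)   # hypothetical study samples
true[is_qc] = 8.0                               # the QC pool has a fixed true level
drift = 1.0 - 0.004 * order                     # slow instrument sensitivity loss
observed = true * drift

# Fit the drift from the QC injections only, then rescale every injection.
coeffs = np.polyfit(order[is_qc], observed[is_qc], deg=2)
fitted = np.polyval(coeffs, order)
corrected = observed * fitted[is_qc].mean() / fitted

cv = lambda x: 100 * x.std() / x.mean()
print(f"QC coefficient of variation before: {cv(observed[is_qc]):.1f}%")
print(f"QC coefficient of variation after:  {cv(corrected[is_qc]):.1f}%")
```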
Comparison of aerodynamic models for Vertical Axis Wind Turbines
NASA Astrophysics Data System (ADS)
Simão Ferreira, C.; Aagaard Madsen, H.; Barone, M.; Roscher, B.; Deglaire, P.; Arduin, I.
2014-06-01
Multi-megawatt Vertical Axis Wind Turbines (VAWTs) are experiencing an increased interest for floating offshore applications. However, VAWT development is hindered by the lack of fast, accurate and validated simulation models. This work compares six different numerical models for VAWTs: a multiple streamtube model, a double-multiple streamtube model, the actuator cylinder model, a 2D potential flow panel model, a 3D unsteady lifting line model, and a 2D conformal mapping unsteady vortex model. The comparison covers rotor configurations with two NACA0015 blades, for several tip speed ratios, rotor solidities and fixed pitch angles, including heavily loaded rotors, in inviscid flow. The results show that the streamtube models are inaccurate, and that correct predictions of rotor power and rotor thrust are an effect of error cancellation which only occurs at specific configurations. The other four models, which explicitly model the wake as a system of vorticity, show mostly differences due to the instantaneous or time-averaged formulation of the loading and flow, for which further research is needed.
Fitting Multimeric Protein Complexes into Electron Microscopy Maps Using 3D Zernike Descriptors
Esquivel-Rodríguez, Juan; Kihara, Daisuke
2012-01-01
A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method named EMLZerD generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root mean square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases. PMID:22417139
Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A
2013-09-13
Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.
Nestel, Paul J; Khan, Anmar A; Straznicky, Nora E; Mellett, Natalie A; Jayawardana, Kaushala; Mundra, Piyushkumar A; Lambert, Gavin W; Meikle, Peter J
2017-01-01
Plasma sphingolipids including ceramides, and gangliosides are associated with insulin resistance (IR) through effects on insulin signalling and glucose metabolism. Our studies of subjects with metabolic syndrome (MetS) showed close relationships between IR and sympathetic nervous system (SNS) activity including arterial norepinephrine (NE). We have therefore investigated possible associations of IR and SNS activity with complex lipids that are involved in both insulin sensitivity and neurotransmission. We performed a cross-sectional assessment of 23 lipid classes/subclasses (total 339 lipid species) by tandem mass spectrometry in 94 overweight untreated subjects with IR (quantified by HOMA-IR, Matsuda index and plasma insulin). Independently of IR parameters, several circulating complex lipids associated significantly with arterial NE and NEFA (non-esterified fatty acids) and marginally with heart rate (HR). After accounting for BMI, HOMA-IR, systolic BP, age, gender, and correction for multiple comparisons, these associations were significant (p < 0.05): NE with ceramide, phosphatidylcholine, alkyl- and alkenylphosphatidylcholine and free cholesterol; NEFA with mono-, di- and trihexosylceramide, GM3 ganglioside, sphingomyelin, phosphatidylcholine, alkyl- and alkenylphosphatidylcholine, phosphatidylinositol and free cholesterol; HR marginally (0.05 < p < 0.1) with ceramide, GM3 ganglioside, sphingomyelin, lysophosphatidylcholine, phosphatidylinositol, lysophosphatidylinositol and free cholesterol. Multiple subspecies of these lipids significantly associated with NE and NEFA. None of the IR biomarkers associated significantly with lipid classes/subclasses after correction for multiple comparisons. This is the first demonstration that arterial norepinephrine and NEFA, that reflect both SNS activity and IR, associate significantly with circulating complex lipids independently of IR, suggesting a role for such lipids in neural mechanisms operating in MetS. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Higher Moments of Net-Kaon Multiplicity Distributions at STAR
NASA Astrophysics Data System (ADS)
Xu, Ji;
2017-01-01
Fluctuations of conserved quantities such as baryon number (B), electric charge number (Q), and strangeness number (S) are sensitive to the correlation length and can be used to probe non-Gaussian fluctuations near the critical point. Experimentally, higher moments of the multiplicity distributions have been used to search for the QCD critical point in heavy-ion collisions. In this paper, we report the efficiency-corrected cumulants and their ratios of mid-rapidity (|y| < 0.5) net-kaon multiplicity distributions in Au+Au collisions at √s_NN = 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV collected in 2010, 2011, and 2014 with STAR at RHIC. The centrality and energy dependence of the cumulants and their ratios are presented. Furthermore, the comparisons with baseline calculations (Poisson) and non-critical-point models (UrQMD) are also discussed.
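The cumulants and cumulant ratios referred to above can be computed directly from an event-by-event net-multiplicity sample. The sketch below uses a toy Skellam (difference-of-Poissons) baseline and does not reproduce STAR's efficiency correction or centrality binning.

```python
# Cumulants C1-C4 of an event-by-event net-kaon multiplicity distribution and
# the ratios commonly reported (C2/C1, C3/C2, C4/C2). Toy data drawn as the
# difference of two Poisson counts (the Skellam/Poisson baseline).
import numpy as np

rng = np.random.default_rng(3)
n_events = 200_000
k_plus = rng.poisson(5.0, n_events)
k_minus = rng.poisson(4.0, n_events)
net_k = k_plus - k_minus

mu = net_k.mean()
d = net_k - mu
c1 = mu
c2 = (d**2).mean()
c3 = (d**3).mean()
c4 = (d**4).mean() - 3 * c2**2          # fourth cumulant

print(f"C1={c1:.3f}  C2={c2:.3f}  C3={c3:.3f}  C4={c4:.3f}")
print(f"C2/C1={c2/c1:.3f}  C3/C2={c3/c2:.3f}  C4/C2={c4/c2:.3f}")
# For the Skellam baseline with means m1, m2: C2/C1 = (m1+m2)/(m1-m2),
# C3/C2 = (m1-m2)/(m1+m2), and C4/C2 = 1.
```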
NASA Technical Reports Server (NTRS)
Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James
2017-01-01
Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide the location and strength of burning aerosol sources. Our previous work (Petrenko et al., 2012) shows that satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the assumed source strength. We now refine the satellite-snapshot method and investigate whether applying simple multiplicative emission correction factors to the widely used Global Fire Emission Database version 3 (GFEDv3) emission inventory can achieve regional-scale consistency between MODIS AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared over a set of more than 900 BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. The AOD comparison presented here shows that regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. Additional analysis including a small-fire emission correction shows the complementary nature of correcting for source strength and adding missing sources, and also indicates that in some regions other factors may be significant in explaining model-satellite discrepancies. This work sets the stage for a larger intercomparison within the Aerosol Inter-comparisons between Observations and Models (AeroCom) multi-model biomass burning experiment. We discuss here some of the other possible factors affecting the remaining discrepancies between model simulations and observations, but await comparisons with other AeroCom models to draw further conclusions.
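A multiplicative correction factor of the kind described above amounts, in its simplest form, to scaling a region's emissions by the ratio of observed to simulated smoke AOD aggregated over that region's burning cases. The short sketch below uses invented AOD values and a median ratio purely for illustration; it is not the authors' aggregation scheme.

```python
# Illustrative regional emission correction factor from per-case AOD snapshots.
# All numbers are hypothetical.
import numpy as np

aod_modis = np.array([0.42, 0.55, 0.31, 0.47, 0.62])    # observed near-plume AOD
aod_gocart = np.array([0.21, 0.30, 0.19, 0.22, 0.33])   # model AOD for the same cases

ratios = aod_modis / aod_gocart
correction_factor = np.median(ratios)        # robust regional scaling factor
print(f"regional emission correction factor ~ {correction_factor:.2f}")
```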
NASA Astrophysics Data System (ADS)
Dorman, L. I.; Iucci, N.; Pustil'Nik, L. A.; Sternlieb, A.; Villoresi, G.; Zukerman, I. G.
On the basis of cosmic ray (CR) hourly data obtained by the neutron monitor (NM) of the Emilio Segre' Observatory (height 2025 m above sea level, cut-off rigidity for the vertical direction 10.8 GV), we determine the snow effect in CR for total neutron intensity and for multiplicities m=1, m=2, m=3, m=4, m=5, m=6, and m=7. For comparison, and to exclude primary CR variations, we also use hourly data on neutron multiplicities obtained by the Rome NM (about sea level, cut-off rigidity 6.7 GV). In this paper we analyze the effects of snow in the periods from 4 January 2000 to 15 April 2000, with a maximal absorption effect of about 5%, and from 21 December 2000 to 31 March 2001, with a maximal effect of 13% in the total neutron intensity. We use the periods without snow to determine regression coefficients between primary CR variations observed by the NM of the Emilio Segre' Observatory and by the Rome NM. On the basis of the obtained results we develop a method to correct the data for the snow effect by using hourly data from several NMs. We also estimate the accuracy with which NM data can be corrected at stations where the snow effect can be important.
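A minimal sketch of the regression-based correction described above: fit the relation between the mountain station and a snow-free reference monitor on snow-free data, then treat the later departure from the predicted count rate as the snow absorption. The station behaviour, noise levels and snow profile below are synthetic, not the observatory data.

```python
# Snow-effect estimation for one neutron monitor using a snow-free reference NM.
# Synthetic hourly data; station numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
hours = np.arange(24 * 90)                              # ~3 months of hourly data
primary = 2.0 * np.sin(hours / 150)                     # common primary CR variation (%)
rome = primary + rng.normal(0, 0.1, hours.size)         # reference NM, never snow-covered
snow = -5.0 * np.clip((hours - 1200) / 400, 0, 1)       # slow build-up of ~5 % absorption
eso = 1.2 * primary + snow + rng.normal(0, 0.1, hours.size)   # mountain NM

# 1) Regression coefficients from a snow-free interval.
free = hours < 1000
slope, intercept = np.polyfit(rome[free], eso[free], 1)

# 2) Departure from the reference-predicted rate estimates the snow absorption.
snow_estimate = eso - (slope * rome + intercept)

# 3) Smooth the estimate (snow changes over days, not hours) and subtract it,
#    so genuine short-term CR variations measured locally are preserved.
kernel = np.ones(72) / 72
corrected = eso - np.convolve(snow_estimate, kernel, mode="same")

print(f"estimated maximum snow absorption: {snow_estimate[-500:].mean():.1f} %")
```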
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadler, D.A.; Sun, F.; Littlejohn, D.
1995-12-31
ICP-OES is a useful technique for multi-element analysis of soils. However, as a number of elements are present in relatively high concentrations, matrix interferences can occur and examples have been widely reported. The availability of CCD detectors has increased the opportunities for rapid multi-element, multi-wavelength determination of elemental concentrations in soils and other environmental samples. As the composition of soils from industrial sites can vary considerably, especially when taken from different pit horizons, procedures are required to assess the extent of interferences and correct the effects, on a simultaneous multi-element basis. In single element analysis, plasma operating conditions can sometimes be varied to minimize or even remove multiplicative interferences. In simultaneous multi-element analysis, the scope for this approach may be limited, depending on the spectrochemical characteristics of the emitting analyte species. Matrix matching, by addition of major sample components to the analyte calibrant solutions, can be used to minimize inaccuracies. However, there are also limitations to this procedure, when the sample composition varies significantly. Multiplicative interference effects can also be assessed by a "single standard addition" of each analyte to the sample solution and the information obtained may be used to correct the analyte concentrations determined directly. Each of these approaches has been evaluated to ascertain the best procedure for multi-element analysis of industrial soils by ICP-OES with CCD detection at multiple wavelengths. Standard reference materials and field samples have been analyzed to illustrate the efficacy of each procedure.
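The "single standard addition" assessment mentioned above reduces, in its simplest blank-corrected linear form, to the following arithmetic; the intensities and concentrations are illustrative only, not measured values.

```python
# Single standard addition: recover the analyte concentration from the signal
# before and after spiking with a known concentration increment (linear,
# blank-corrected response assumed; dilution neglected). Values are invented.
c_added = 5.0          # mg/L added to the sample solution
s_sample = 1200.0      # emission intensity of the unspiked sample
s_spiked = 2100.0      # intensity after the addition

c_analyte = c_added * s_sample / (s_spiked - s_sample)
print(f"analyte concentration ~ {c_analyte:.2f} mg/L")

# Comparing this with the concentration read off a simple aqueous calibration
# gives a multiplicative correction factor for this element in this matrix.
c_direct = 5.5         # mg/L from external calibration (illustrative)
print(f"multiplicative correction factor ~ {c_analyte / c_direct:.2f}")
```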
Should "Multiple Imputations" Be Treated as "Multiple Indicators"?
ERIC Educational Resources Information Center
Mislevy, Robert J.
1993-01-01
Multiple imputations for latent variables are constructed so that analyses treating them as true variables have the correct expectations for population characteristics. Analyzing multiple imputations in accordance with their construction yields correct estimates of population characteristics, whereas analyzing them as multiple indicators generally…
Nelson, Peter M; Burns, Matthew K; Kanive, Rebecca; Ysseldyke, James E
2013-12-01
The current study used a randomized controlled trial to compare the effects of a practice-based intervention and a mnemonic strategy intervention on the retention and application of single-digit multiplication facts with 90 third- and fourth-grade students with math difficulties. Changes in retention and application were assessed separately using one-way ANCOVAs in which students' pretest scores were included as the covariate. Students in the practice-based intervention group had higher retention scores (expressed as the total number of digits correct per minute) relative to the control group. No statistically significant between-group differences were observed for application scores. Practical and theoretical implications for interventions targeting basic multiplication facts are discussed. © 2013.
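The analysis described above, a one-way ANCOVA on post-test scores with the pretest as covariate, can be sketched with toy data as follows; the group labels, score scale and effect sizes are invented and this is not the study's analysis script.

```python
# One-way ANCOVA: retention posttest ~ pretest covariate + intervention group.
# Toy data; variable names and effects are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 45
df = pd.DataFrame({
    "group": ["practice"] * n + ["mnemonic"] * n,
    "pretest": rng.normal(20, 5, 2 * n),
})
df["posttest"] = (0.8 * df["pretest"]
                  + np.where(df["group"] == "practice", 6.0, 2.0)
                  + rng.normal(0, 4, 2 * n))

model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))        # group effect adjusted for pretest
```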
Analysis of the Effects of Fixed Costs on Learning Curve Calculations
1994-09-01
Smerbeck, A M; Parrish, J; Yeh, E A; Hoogs, M; Krupp, Lauren B; Weinstock-Guttman, B; Benedict, R H B
2011-04-01
The Brief Visuospatial Memory Test - Revised (BVMTR) and the Symbol Digit Modalities Test (SDMT) oral-only administration are known to be sensitive to cerebral disease in adult samples, but pediatric norms are not available. A demographically balanced sample of healthy control children (N = 92) ages 6-17 was tested with the BVMTR and SDMT. Multiple regression analysis (MRA) was used to develop demographically controlled normative equations. This analysis provided equations that were then used to construct demographically adjusted z-scores for the BVMTR Trial 1, Trial 2, Trial 3, Total Learning, and Delayed Recall indices, as well as the SDMT total correct score. To demonstrate the utility of this approach, a comparison group of children with acute disseminated encephalomyelitis (ADEM) or multiple sclerosis (MS) were also assessed. We find that these visual processing tests discriminate neurological patients from controls. As the tests are validated in adult multiple sclerosis, they are likely to be useful in monitoring pediatric onset multiple sclerosis patients as they transition into adulthood.
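A minimal sketch of regression-based normative scoring of the kind described above: fit the control sample, then express a new raw score as a z-score relative to the demographically predicted value. The coefficients and the example child are hypothetical, not the published norms.

```python
# Regression-based demographic norms: predicted score from age and sex, then
# z = (raw - predicted) / SD(residuals). Synthetic control data only.
import numpy as np

rng = np.random.default_rng(6)
n = 92
age = rng.uniform(6, 17, n)
sex = rng.integers(0, 2, n)                       # 0 = male, 1 = female
sdmt = 15 + 3.0 * age + 2.0 * sex + rng.normal(0, 6, n)   # total correct

X = np.column_stack([np.ones(n), age, sex])
beta, *_ = np.linalg.lstsq(X, sdmt, rcond=None)
resid_sd = (sdmt - X @ beta).std(ddof=X.shape[1])

def adjusted_z(raw, age, sex):
    """z-score of a raw score relative to the demographic prediction."""
    predicted = beta @ np.array([1.0, age, sex])
    return (raw - predicted) / resid_sd

print(f"z for a 12-year-old girl scoring 45: {adjusted_z(45, 12, 1):+.2f}")
```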
NASA Astrophysics Data System (ADS)
Kang, Ziho
This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task with multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices for a conflict detection task with multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) determining whether the scanpaths of experts can be used to teach novices. In order to compare experts' and novices' scanpaths, two methods were developed. The first proposed method is matrix comparison using the Mantel test. The second is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets during the preliminary experiment, but turned out to be inapplicable to a realistic case in which tens of aircraft were presented on screen; MTAHC, however, remained effective with a large number of aircraft. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths differ. The MTAHC result was able to show explicitly how experts visually grouped multiple aircraft based on similar altitudes, while novices tended to group them based on convergence. The MTAHC results also showed that novices paid considerable attention to converging aircraft groups even when they were safely separated by altitude; consequently, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed these scanpath differences, experts' scanpaths were shown to novices in order to assess their effectiveness as a training aid. The scanpath treatment group showed indications of changing their visual movements from trajectory-based to altitude-based patterns. Between the treatment and the non-treatment group there were no significant differences in the number of correct detections; however, the treatment group made significantly fewer false alarms.
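The first of the two methods above, comparing two distance matrices with a Mantel permutation test, can be sketched generically as follows; the toy dissimilarity matrices stand in for scanpath-transition matrices and the code is not the dissertation's implementation.

```python
# Permutation Mantel test: correlate two symmetric dissimilarity matrices and
# assess significance by shuffling the rows/columns of one of them.
import numpy as np

def mantel(d1, d2, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)            # use each pair of items once
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(d2))
        r_perm = np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy scanpath-transition dissimilarity matrices for 8 targets.
rng = np.random.default_rng(7)
base = rng.random((8, 8))
d_expert = (base + base.T) / 2
np.fill_diagonal(d_expert, 0)
noise = rng.random((8, 8)) * 0.3
d_novice = d_expert + (noise + noise.T) / 2
np.fill_diagonal(d_novice, 0)

r, p = mantel(d_expert, d_novice)
print(f"Mantel r = {r:.2f}, permutation p = {p:.4f}")
```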
`Dem DEMs: Comparing Methods of Digital Elevation Model Creation
NASA Astrophysics Data System (ADS)
Rezza, C.; Phillips, C. B.; Cable, M. L.
2017-12-01
Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small scale details presented from this data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation and variations between them can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets, and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero-point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting for a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. SOCET SET has more topographic variability due to its decreased smoothing, which is borne out by preliminary offset calculations. In the future, we plan to expand upon this preliminary work with more regions of Europa, continue quantifying the height differences and relative accuracy of each method, and generate more DEMs to expand our available comparison regions.
Weinberg, W A; McLean, A; Snider, R L; Rintelmann, J W; Brumback, R A
1989-12-01
Eight groups of learning disabled children (N = 100), categorized by the clinical Lexical Paradigm as good readers or poor readers, were individually administered the Gilmore Oral Reading Test, Form D, by one of four input/retrieval methods: (1) the standardized method of administration in which the child reads each paragraph aloud and then answers five questions relating to the paragraph [read/recall method]; (2) the child reads each paragraph aloud and then for each question selects the correct answer from among three choices read by the examiner [read/choice method]; (3) the examiner reads each paragraph aloud and reads each of the five questions to the child to answer [listen/recall method]; and (4) the examiner reads each paragraph aloud and then for each question reads three multiple-choice answers from which the child selects the correct answer [listen/choice method]. The major difference in scores was between the groups tested by the recall versus the orally read multiple-choice methods. This study indicated that poor readers who listened to the material and were tested by orally read multiple-choice format could perform as well as good readers. The performance of good readers was not affected by listening or by the method of testing. The multiple-choice testing improved the performance of poor readers independent of the input method. This supports the arguments made previously that a "bypass approach" to education of poor readers in which testing is accomplished using an orally read multiple-choice format can enhance the child's school performance on reading-related tasks. Using a listening while reading input method may further enhance performance.
Wafer hotspot prevention using etch aware OPC correction
NASA Astrophysics Data System (ADS)
Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao
2016-03-01
As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming essential to achieve the technology shrink requirements. Recently, Optical Proximity Correction (OPC) flows have been proposed that correct multiple mask patterns simultaneously to provide multiple-patterning awareness during OPC. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch shrink response for all patterns, independent of their proximity, which is not sufficient to fully prevent inter-exposure hot-spots, for example different-color space violations or via coverage/enclosure violations post etch. In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach to etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC remains suitable for full-chip processing from a computational-intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet the proposed approach also corrects post-etch defects more accurately.
Avoiding false discoveries in association studies.
Sabatti, Chiara
2007-01-01
We consider the problem of controlling false discoveries in association studies. We assume that the design of the study is adequate, so that "false discoveries" can arise only from random chance, not from confounding or other flaws. Under this premise, we review the statistical framework for hypothesis testing and correction for multiple comparisons. We consider in detail the currently accepted strategies in linkage analysis. We then examine the underlying similarities and differences between linkage and association studies and document some of the most recent methodological developments for association mapping.
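The two correction philosophies reviewed above can be made concrete with a small sketch: a Bonferroni bound on the family-wise error rate versus the Benjamini-Hochberg step-up procedure for the false discovery rate, both hand-rolled on simulated p-values (the effect sizes and test counts are invented, not drawn from any real association scan).

```python
# Bonferroni (FWER) versus Benjamini-Hochberg (FDR) on simulated p-values.
import numpy as np

rng = np.random.default_rng(8)
m = 10_000
pvals = rng.uniform(size=m)
pvals[:30] = rng.uniform(0, 1e-5, 30)           # a handful of true associations

alpha = 0.05
bonf_hits = pvals < alpha / m                    # family-wise error control

# Benjamini-Hochberg: find the largest k with p_(k) <= (k/m)*alpha,
# then reject the k smallest p-values.
order = np.argsort(pvals)
ranked = pvals[order]
below = ranked <= (np.arange(1, m + 1) / m) * alpha
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
bh_hits = np.zeros(m, dtype=bool)
bh_hits[order[:k]] = True

print("Bonferroni discoveries:", int(bonf_hits.sum()))
print("BH (FDR 5%) discoveries:", int(bh_hits.sum()))
```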
NASA Technical Reports Server (NTRS)
Khandelwal, Govind S.; Khan, Ferdous
1989-01-01
An optical model description of energy and momentum transfer in relativistic heavy-ion collisions, based upon composite particle multiple scattering theory, is presented. Transverse and longitudinal momentum transfers to the projectile are shown to arise from the real and absorptive part of the optical potential, respectively. Comparisons of fragment momentum distribution observables with experiments are made and trends outlined based on our knowledge of the underlying nucleon-nucleon interaction. Corrections to the above calculations are discussed. Finally, use of the model as a tool for estimating collision impact parameters is indicated.
SU-F-T-180: Evaluation of a Scintillating Screen Detector for Proton Beam QA and Acceptance Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghebremedhin, A; Taber, M; Koss, P
2016-06-15
Purpose: To test the performance of a commercial scintillating screen detector for acceptance testing and Quality Assurance of a proton pencil beam scanning system. Method: The detector (Lexitek DRD 400) has a 40cm × 40cm field, uses a thin scintillator imaged onto a 16-bit scientific CCD with ∼0.5mm resolution. A grid target and LED illuminators are provided for spatial calibration and relative gain correction. The detector mounts to the nozzle with micron precision. Tools are provided for image processing and analysis of single or multiple Gaussian spots. Results: The bias and gain of the detector were studied to measure repeatability and accuracy. Gain measurements were taken with the LED illuminators to measure repeatability and variation of the lens-CCD pair as a function of f-stop. Overall system gain was measured with a passive scattering (broad) beam whose shape is calibrated with EDR film placed in front of the scintillator. To create a large uniform field, overlapping small fields were recorded with the detector translated laterally and stitched together to cover the full field. Due to the long exposures required to obtain multiple spills of the synchrotron and very high detector sensitivity, borated polyethylene shielding was added to reduce direct radiation events hitting the CCD. Measurements with a micro ion chamber were compared to the detector’s spot profile. Software was developed to process arrays of Gaussian spots and to correct for radiation events. Conclusion: The detector background has a fixed bias, a small component linear in time, and is easily corrected. The gain correction method was validated with 2% accuracy. The detector spot profile matches the micro ion chamber data over 4 orders of magnitude. The multiple spot analyses can be easily used with plan data for measuring pencil beam uniformity and for regular QA comparison.
All of the above: When multiple correct response options enhance the testing effect.
Bishara, Anthony J; Lanzo, Lauren A
2015-01-01
Previous research has shown that multiple choice tests often improve memory retention. However, the presence of incorrect lures often attenuates this memory benefit. The current research examined the effects of "all of the above" (AOTA) options. When such options are correct, no incorrect lures are present. In the first three experiments, a correct AOTA option on an initial test led to a larger memory benefit than no test and standard multiple choice test conditions. The benefits of a correct AOTA option occurred even without feedback on the initial test; for both 5-minute and 48-hour retention delays; and for both cued recall and multiple choice final test formats. In the final experiment, an AOTA question led to better memory retention than did a control condition that had identical timing and exposure to response options. However, the benefits relative to this control condition were similar regardless of the type of multiple choice test (AOTA or not). Results suggest that retrieval contributes to multiple choice testing effects. However, the extra testing effect from a correct AOTA option, rather than being due to more retrieval, might be due simply to more exposure to correct information.
Variability Search in GALFACTS
NASA Astrophysics Data System (ADS)
Kania, Joseph; Wenger, Trey; Ghosh, Tapasi; Salter, Christopher J.
2015-01-01
The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo-sky survey using the seven-beam Arecibo L-band Feed Array (ALFA). The Survey is centered at 1.375 GHz with 300-MHz bandwidth, and measures all four Stokes parameters. We are looking for compact sources that vary in intensity or polarization on timescales of about a month via intra-survey comparisons, and for long-term variations through comparisons with the NRAO VLA Sky Survey. Data processing includes locating and rejecting radio frequency interference, recognizing sources, two-dimensional Gaussian fitting to multiple cuts through the same source, and gain corrections. Our Python code is being used on the calibration sources observed in conjunction with the survey measurements to determine the calibration parameters that will then be applied to the data for the main field.
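The two-dimensional Gaussian fitting step mentioned above can be sketched for a single synthetic source cutout as follows; the image, beam shape and noise level are invented and this is not the GALFACTS pipeline code.

```python
# Fit a 2D Gaussian to a small image cutout of a compact source, the kind of
# step used to measure source position and amplitude before gain correction.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx**2) + (y - y0) ** 2 / (2 * sy**2)))
    return (g + offset).ravel()

y, x = np.mgrid[0:32, 0:32]
rng = np.random.default_rng(9)
truth = (10.0, 15.3, 16.8, 2.5, 3.1, 1.0)        # synthetic source parameters
image = gauss2d((x, y), *truth).reshape(32, 32) + rng.normal(0, 0.3, (32, 32))

p0 = (image.max(), 16, 16, 2, 2, np.median(image))
popt, pcov = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
print("fitted amplitude, x0, y0:", np.round(popt[:3], 2))
```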
Population entropies estimates of proteins
NASA Astrophysics Data System (ADS)
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate variability of amino acids sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus, correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented and an empirical study on how well this new algorithm performs on simulated dataset of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
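The column-wise Shannon entropy H = -Σ p_i log2 p_i that underlies the method can be computed directly from a multiple sequence alignment; the toy alignment below is illustrative, gaps are counted as a symbol, and the snippet is not the EntropyCorrect package.

```python
# Per-column Shannon entropy of a protein multiple sequence alignment.
import math
from collections import Counter

alignment = [
    "MKV-LLAG",
    "MKVALLAG",
    "MRVALLSG",
    "MKIALLAG",
    "MKVALMSG",
]

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

for i in range(len(alignment[0])):
    col = [seq[i] for seq in alignment]
    print(f"position {i + 1}: H = {column_entropy(col):.2f} bits")
```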
An introduction to multiplicity issues in clinical trials: the what, why, when and how.
Li, Guowei; Taljaard, Monica; Van den Heuvel, Edwin R; Levine, Mitchell Ah; Cook, Deborah J; Wells, George A; Devereaux, Philip J; Thabane, Lehana
2017-04-01
In clinical trials it is not uncommon to face a multiple testing problem, which can have an impact on both type I and type II error rates, leading to inappropriate interpretation of trial results. Multiplicity issues may need to be considered at the design, analysis and interpretation stages of a trial. The proportion of trial reports not adequately correcting for multiple testing remains substantial. The purpose of this article is to provide an introduction to multiple testing issues in clinical trials, and to reduce confusion around the need for multiplicity adjustments. We use a tutorial, question-and-answer approach to address the key issues of why, when and how to consider multiplicity adjustments in trials. We summarize the relevant circumstances under which multiplicity adjustments ought to be considered, as well as options for carrying out multiplicity adjustments in terms of trial design factors including Population, Intervention/Comparison, Outcome, Time frame and Analysis (PICOTA). Results are presented in an easy-to-use table and flow diagrams. Confusion about multiplicity issues can be reduced or avoided by considering the potential impact of multiplicity on type I and II errors and, if necessary, pre-specifying statistical approaches to either avoid or adjust for multiplicity in the trial protocol or analysis plan. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear-based imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual, interpolated, and constructed data is also mentioned.
Streiner, David L
2015-10-01
Testing many null hypotheses in a single study results in an increased probability of detecting a significant finding just by chance (the problem of multiplicity). Debates have raged over many years with regard to whether to correct for multiplicity and, if so, how it should be done. This article first discusses how multiple tests lead to an inflation of the α level, then explores the following different contexts in which multiplicity arises: testing for baseline differences in various types of studies, having >1 outcome variable, conducting statistical tests that produce >1 P value, taking multiple "peeks" at the data, and unplanned, post hoc analyses (i.e., "data dredging," "fishing expeditions," or "P-hacking"). It then discusses some of the methods that have been proposed for correcting for multiplicity, including single-step procedures (e.g., Bonferroni); multistep procedures, such as those of Holm, Hochberg, and Šidák; false discovery rate control; and resampling approaches. Note that these various approaches describe different aspects and are not necessarily mutually exclusive. For example, resampling methods could be used to control the false discovery rate or the family-wise error rate (as defined later in this article). However, the use of one of these approaches presupposes that we should correct for multiplicity, which is not universally accepted, and the article presents the arguments for and against such "correction." The final section brings together these threads and presents suggestions with regard to when it makes sense to apply the corrections and how to do so. © 2015 American Society for Nutrition.
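As a concrete illustration of several of the procedures named above, the sketch below runs one invented set of p values through Bonferroni, Holm, Šidák, and Benjamini-Hochberg FDR adjustment using statsmodels. It is a generic example, not code from the article.

```python
# Illustrative only: apply several common multiplicity corrections to one
# invented set of p values and compare which hypotheses survive at alpha=0.05.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.020, 0.041, 0.300]   # hypothetical raw p values

for method in ("bonferroni", "holm", "sidak", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:10s} adjusted p: {[round(p, 3) for p in p_adj]}  reject: {list(reject)}")
```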
Grayscale inhomogeneity correction method for multiple mosaicked electron microscope images
NASA Astrophysics Data System (ADS)
Zhou, Fangxu; Chen, Xi; Sun, Rong; Han, Hua
2018-04-01
Electron microscope image stitching is highly desired for acquiring microscopic-resolution images of large target scenes in neuroscience. However, a mosaic of multiple electron microscope images may exhibit severe grayscale inhomogeneity, due to the instability of the electron microscope system and to registration errors, which degrades the visual quality of the mosaicked EM images and complicates follow-up processing such as automatic object recognition. Consequently, a grayscale correction method for multiple mosaicked electron microscope images is indispensable in these areas. Different from most previous grayscale correction methods, this paper designs a grayscale correction process for multiple EM images that tackles the difficulty of correcting many images jointly and achieves grayscale consistency in the overlap regions. We adjust the overall grayscale of the mosaicked images using the location and grayscale information of manually selected seed images, and then fuse local overlap regions between adjacent images using Poisson image editing. Experimental results demonstrate the effectiveness of the proposed method.
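A much simplified numpy sketch of the first stage described above: scaling each tile so that its overlap-region statistics match those of a manually chosen seed tile. The tile/overlap variables are placeholders, and the paper's actual pipeline (including the Poisson blending step) is not reproduced here.

```python
import numpy as np

def match_gain_to_seed(tile, tile_overlap, seed_overlap):
    """Scale and shift a tile so its overlap region matches the seed tile's
    overlap statistics (mean/std matching). Simplified assumption; the paper
    additionally fuses overlap regions with Poisson image editing."""
    t_mean, t_std = tile_overlap.mean(), tile_overlap.std()
    s_mean, s_std = seed_overlap.mean(), seed_overlap.std()
    gain = s_std / max(t_std, 1e-6)
    corrected = (tile - t_mean) * gain + s_mean
    return np.clip(corrected, 0, 255)

# Toy example: a tile that is systematically darker than the seed image.
rng = np.random.default_rng(1)
seed_ov = rng.normal(120, 20, (64, 64))
tile = rng.normal(90, 15, (256, 256))
tile_ov = tile[:64, :64]
print(match_gain_to_seed(tile, tile_ov, seed_ov).mean())
```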
Magnetic Resonance Fingerprinting of Adult Brain Tumors: Initial Experience
Badve, Chaitra; Yu, Alice; Dastmalchian, Sara; Rogers, Matthew; Ma, Dan; Jiang, Yun; Margevicius, Seunghee; Pahwa, Shivani; Lu, Ziang; Schluchter, Mark; Sunshine, Jeffrey; Griswold, Mark; Sloan, Andrew; Gulani, Vikas
2016-01-01
Background Magnetic resonance fingerprinting (MRF) allows rapid simultaneous quantification of T1 and T2 relaxation times. This study assesses the utility of MRF in differentiating between common types of adult intra-axial brain tumors. Methods MRF acquisition was performed in 31 patients with untreated intra-axial brain tumors: 17 glioblastomas, 6 WHO grade II lower-grade gliomas and 8 metastases. T1, T2 of the solid tumor (ST), immediate peritumoral white matter (PW), and contralateral white matter (CW) were summarized within each region of interest. Statistical comparisons on mean, standard deviation, skewness and kurtosis were performed using the univariate Wilcoxon rank sum test across various tumor types. Bonferroni correction was used to correct for multiple comparisons testing. Multivariable logistic regression analysis was performed for discrimination between glioblastomas and metastases, and the area under the receiver operator curve (AUC) was calculated. Results Mean T2 values could differentiate solid tumor regions of lower-grade gliomas from metastases (mean ± sd: 172 ± 53 ms and 105 ± 27 ms, respectively; p = 0.004, significant after Bonferroni correction). Mean T1 of PW surrounding lower-grade gliomas differed from PW around glioblastomas (mean ± sd: 1066 ± 218 ms and 1578 ± 331 ms, respectively; p = 0.004, significant after Bonferroni correction). Logistic regression analysis revealed that mean T2 of ST offered the best separation between glioblastomas and metastases with an AUC of 0.86 (95% CI 0.69-1.00, p < 0.0001). Conclusion MRF allows rapid simultaneous T1, T2 measurement in brain tumors and surrounding tissues. MRF-based relaxometry can identify quantitative differences between solid-tumor regions of lower-grade gliomas and metastases and between peritumoral regions of glioblastomas and lower-grade gliomas. PMID:28034994
MR Fingerprinting of Adult Brain Tumors: Initial Experience.
Badve, C; Yu, A; Dastmalchian, S; Rogers, M; Ma, D; Jiang, Y; Margevicius, S; Pahwa, S; Lu, Z; Schluchter, M; Sunshine, J; Griswold, M; Sloan, A; Gulani, V
2017-03-01
MR fingerprinting allows rapid simultaneous quantification of T1 and T2 relaxation times. This study assessed the utility of MR fingerprinting in differentiating common types of adult intra-axial brain tumors. MR fingerprinting acquisition was performed in 31 patients with untreated intra-axial brain tumors: 17 glioblastomas, 6 World Health Organization grade II lower grade gliomas, and 8 metastases. T1, T2 of the solid tumor, immediate peritumoral white matter, and contralateral white matter were summarized within each ROI. Statistical comparisons on mean, SD, skewness, and kurtosis were performed by using the univariate Wilcoxon rank sum test across various tumor types. Bonferroni correction was used to correct for multiple-comparison testing. Multivariable logistic regression analysis was performed for discrimination between glioblastomas and metastases, and area under the receiver operator curve was calculated. Mean T2 values could differentiate solid tumor regions of lower grade gliomas from metastases (mean, 172 ± 53 ms, and 105 ± 27 ms, respectively; P = .004, significant after Bonferroni correction). The mean T1 of peritumoral white matter surrounding lower grade gliomas differed from peritumoral white matter around glioblastomas (mean, 1066 ± 218 ms, and 1578 ± 331 ms, respectively; P = .004, significant after Bonferroni correction). Logistic regression analysis revealed that the mean T2 of solid tumor offered the best separation between glioblastomas and metastases with an area under the curve of 0.86 (95% CI, 0.69-1.00; P < .0001). MR fingerprinting allows rapid simultaneous T1 and T2 measurement in brain tumors and surrounding tissues. MR fingerprinting-based relaxometry can identify quantitative differences between solid tumor regions of lower grade gliomas and metastases and between peritumoral regions of glioblastomas and lower grade gliomas. © 2017 by American Journal of Neuroradiology.
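The statistical recipe in these two records, univariate Wilcoxon rank-sum tests followed by Bonferroni correction, is easy to reproduce in outline. The sketch below uses invented T2 samples and an assumed number of tests, not the study's data.

```python
# Illustrative sketch of the analysis pattern described above: a rank-sum test
# on one contrast, Bonferroni-corrected for an assumed family of tests.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
lower_grade_T2 = rng.normal(172, 53, 6)    # ms, mimicking the reported group sizes
metastasis_T2 = rng.normal(105, 27, 8)

n_tests = 16                               # assumed family size, e.g. 4 statistics x 4 contrasts
stat, p = ranksums(lower_grade_T2, metastasis_T2)
p_bonf = min(1.0, p * n_tests)
print(f"raw p = {p:.4f}, Bonferroni-adjusted p = {p_bonf:.4f}")
```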
The commercial use of satellite data to monitor the potato crop in the Columbia Basin
NASA Technical Reports Server (NTRS)
Waddington, George R., Jr.; Lamb, Frank G.
1990-01-01
The imaging of potato crops with satellites is described and evaluated in terms of the commercial application of the remotely sensed data. The identification and analysis of the crops is accomplished with multiple images acquired from the Landsat MSS and TM systems. The data are processed on a PC with image-processing software which produces images of the seven 1024 x 1024 pixel windows which are subdivided into 21 512 x 512 pixel windows. Maximization of imaged data throughout the year aids in the identification of crop types by IR reflectance. The classification techniques involve the use of six or seven spectral classes for particular image dates. Comparisons with ground-truth data show good agreement; for example, potato fields are identified correctly 90 percent of the time. Acreage estimates and crop-condition assessments can be made from satellite data and used for corrective agricultural action.
ITG: A New Global GNSS Tropospheric Correction Model
Yao, Yibin; Xu, Chaoqian; Shi, Junbo; Cao, Na; Zhang, Bao; Yang, Junjian
2015-01-01
Tropospheric correction models are receiving increasing attention, as they play a crucial role in Global Navigation Satellite System (GNSS) positioning. Most commonly used models to date include the GPT2 series and TropGrid2. In this study, we analyzed the advantages and disadvantages of existing models and developed a new model called the Improved Tropospheric Grid (ITG). ITG considers annual, semi-annual and diurnal variations, and includes multiple tropospheric parameters. The amplitude and initial phase of the diurnal variation are estimated as a periodic function. ITG provides temperature, pressure, the weighted mean temperature (Tm) and Zenith Wet Delay (ZWD). We conducted a performance comparison among the proposed ITG model and previous ones, in terms of meteorological measurements from 698 observation stations, Zenith Total Delay (ZTD) products from 280 International GNSS Service (IGS) stations, and Tm from Global Geodetic Observing System (GGOS) products. Results indicate that ITG offers the best performance on the whole. PMID:26196963
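The abstract's description of annual, semi-annual and diurnal periodic terms suggests a model of the following general form. The sketch is a generic harmonic model with placeholder coefficients, not the published ITG grid values.

```python
import numpy as np

def itg_like_parameter(doy, hod, mean, a1, phi1, a2, phi2, ad, phid):
    """Generic periodic model of a tropospheric parameter (e.g. ZWD or Tm):
    mean + annual + semi-annual + diurnal terms. Coefficients are placeholders;
    a grid model of this kind stores them per grid node."""
    annual = a1 * np.cos(2 * np.pi * (doy - phi1) / 365.25)
    semiannual = a2 * np.cos(4 * np.pi * (doy - phi2) / 365.25)
    diurnal = ad * np.cos(2 * np.pi * (hod - phid) / 24.0)
    return mean + annual + semiannual + diurnal

# Example: evaluate a hypothetical ZWD (mm) at day-of-year 200, 14:00 local time.
print(itg_like_parameter(doy=200, hod=14, mean=150, a1=40, phi1=28,
                         a2=10, phi2=100, ad=5, phid=15))
```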
Chen, Xiwei; Yu, Jihnhee
2014-01-01
Many clinical and biomedical studies evaluate treatment effects based on multiple biomarkers that commonly consist of pre- and post-treatment measurements. Some biomarkers can show significant positive treatment effects, while other biomarkers can reflect no effects or even negative effects of the treatments, giving rise to a necessity to develop methodologies that may correctly and efficiently evaluate the treatment effects based on multiple biomarkers as a whole. In the setting of pre- and post-treatment measurements of multiple biomarkers, we propose to apply a receiver operating characteristic (ROC) curve methodology based on the best combination of biomarkers, i.e., the linear combination that maximizes the area under the ROC curve (AUC)-type criterion among all possible linear combinations. In the particular case with independent pre- and post-treatment measurements, we show that the proposed method reduces to the well-known result of Su and Liu (1993). Further, proceeding from the derived best combinations of biomarkers' measurements, we propose an efficient technique via likelihood ratio tests to compare treatment effects. An extensive Monte Carlo study confirms the superiority of the proposed test for comparing treatment effects based on multiple biomarkers in a paired data setting. For practical applications, the proposed method is illustrated with a randomized trial of chlorhexidine gluconate on oral bacterial pathogens in mechanically ventilated patients as well as a treatment study for children with attention deficit-hyperactivity disorder and severe mood dysregulation. PMID:25019920
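Under the binormal assumption used in this line of work, the AUC-maximizing linear combination of markers has a closed form proportional to (Σ0 + Σ1)⁻¹(μ1 − μ0), the Su and Liu (1993) style result mentioned above. The sketch below illustrates that idea on simulated data; it is not the authors' likelihood-ratio procedure.

```python
# Sketch: best linear combination of several biomarkers in the sense of
# maximizing AUC under a binormal assumption. Simulated data only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
p = 3                                   # number of biomarkers
mu0, mu1 = np.zeros(p), np.array([0.5, 0.8, 0.2])
cov = np.eye(p) + 0.3 * (np.ones((p, p)) - np.eye(p))
x0 = rng.multivariate_normal(mu0, cov, 200)    # e.g. "no effect" group
x1 = rng.multivariate_normal(mu1, cov, 200)    # e.g. "treatment effect" group

# Optimal weights ~ (Sigma0 + Sigma1)^-1 (mu1 - mu0), estimated from the samples.
w = np.linalg.solve(np.cov(x0.T) + np.cov(x1.T), x1.mean(0) - x0.mean(0))

scores = np.r_[x0 @ w, x1 @ w]
labels = np.r_[np.zeros(len(x0)), np.ones(len(x1))]
print("AUC of combined marker:", round(roc_auc_score(labels, scores), 3))
```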
Barroso, Teresa G; Martins, Rui C; Fernandes, Elisabete; Cardoso, Susana; Rivas, José; Freitas, Paulo P
2018-02-15
Tuberculosis is one of the major public health concerns. This highly contagious disease affects more than 10.4 million people, being a leading cause of morbidity by infection. Tuberculosis is diagnosed at the point-of-care by the Ziehl-Neelsen sputum smear microscopy test. Ziehl-Neelsen is laborious, prone to human error and infection risk, with a limit of detection of 10^4 cells/mL. In resource-poor nations, a more practical test, with lower detection limit, is paramount. This work uses a magnetoresistive biosensor to detect BCG bacteria for tuberculosis diagnosis. Herein we report: i) nanoparticle assembly method and specificity for tuberculosis detection; ii) demonstration of proportionality between BCG cell concentration and magnetoresistive voltage signal; iii) application of multiplicative signal correction for systematic effects removal; iv) investigation of calibration effectiveness using chemometrics methods; and v) comparison with state-of-the-art point-of-care tuberculosis biosensors. Results present a clear correspondence between voltage signal and cell concentration. Multiplicative signal correction removes baseline shifts within and between biochip sensors, allowing accurate and precise voltage signal between different biochips. The corrected signal was used for multivariate regression models, which significantly decreased the calibration standard error from 0.50 to 0.03 log10(cells/mL). Results show that Ziehl-Neelsen detection limits and below are achievable with the magnetoresistive biochip, when pre-processing and chemometrics are used. Copyright © 2017 Elsevier B.V. All rights reserved.
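Multiplicative signal correction is a standard chemometrics preprocessing step: each measured signal is regressed against a reference signal, and the fitted offset and slope are removed. The generic sketch below uses synthetic signals and is not the authors' pipeline.

```python
import numpy as np

def msc(signals, reference=None):
    """Generic multiplicative signal correction: fit each row as
    a + b * reference and return (row - a) / b. Synthetic illustration only."""
    signals = np.asarray(signals, dtype=float)
    ref = signals.mean(axis=0) if reference is None else np.asarray(reference)
    corrected = np.empty_like(signals)
    for i, row in enumerate(signals):
        b, a = np.polyfit(ref, row, 1)       # slope, intercept
        corrected[i] = (row - a) / b
    return corrected

# Toy "voltage signals" with baseline shifts and gain differences between chips.
base = np.sin(np.linspace(0, 3, 100))
raw = np.vstack([1.2 * base + 0.3, 0.8 * base - 0.1, base + 0.05])
print(np.round(msc(raw).std(axis=0).max(), 3))   # rows nearly coincide after MSC
```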
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
High dose rate brachytherapy source measurement intercomparison.
Poder, Joel; Smith, Ryan L; Shelton, Nikki; Whitaker, May; Butler, Duncan; Haworth, Annette
2017-06-01
This work presents a comparison of air kerma rate (AKR) measurements performed by multiple radiotherapy centres for a single HDR 192Ir source. Two separate groups (consisting of 15 centres) performed AKR measurements at one of two host centres in Australia. Each group travelled to one of the host centres and measured the AKR of a single 192Ir source using their own equipment and local protocols. Results were compared to the 192Ir source calibration certificate provided by the manufacturer by means of a ratio of measured to certified AKR. The comparisons showed remarkably consistent results with the maximum deviation in measurement from the decay-corrected source certificate value being 1.1%. The maximum percentage difference between any two measurements was less than 2%. The comparisons demonstrated the consistency of well-chambers used for 192Ir AKR measurements in Australia, despite the lack of a local calibration service, and served as a valuable focal point for the exchange of ideas and dosimetry methods.
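The comparison statistic is a ratio of the measured AKR to the certificate value decay-corrected to the measurement date. A small sketch follows, assuming the commonly quoted 192Ir half-life of roughly 73.8 days and using invented numbers.

```python
import math

def decay_corrected_akr(akr_certificate, days_elapsed, half_life_days=73.83):
    """Decay-correct a certified 192Ir air kerma rate to the measurement date.
    The half-life value is an assumption (approx. 73.8 d); numbers are illustrative."""
    return akr_certificate * math.exp(-math.log(2) * days_elapsed / half_life_days)

certified = 40.5e3          # hypothetical certificate AKR
measured = 35.2e3           # hypothetical measured AKR on the comparison day
expected = decay_corrected_akr(certified, days_elapsed=15)
print(f"measured / decay-corrected certificate = {measured / expected:.3f}")
```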
NASA Astrophysics Data System (ADS)
Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.
2018-01-01
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations, which likewise have only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.
A Comparison of Off-Level Correction Techniques for Airborne Gravity using GRAV-D Re-Flights
NASA Astrophysics Data System (ADS)
Preaux, S. A.; Melachroinos, S.; Diehl, T. M.
2011-12-01
The airborne gravity data collected for the GRAV-D project contain a number of tracks that have been flown multiple times, either by design or due to data collection issues. Where viable data can be retrieved, these re-flights are a valuable resource not only for assessing the quality of the data but also for evaluating the relative effectiveness of various processing techniques. Correcting for the instantaneous misalignment of the gravimeter sensitive axis with local vertical has been a long-standing challenge for stable platform airborne gravimetry. GRAV-D re-flights are used to compare the effectiveness of existing methods of computing this off-level correction (Valliant 1991; Peters and Brozena 1995; Swain 1996; etc.) and to assess the impact of possible modifications to these methods, including pre-filtering accelerations, use of IMU horizontal accelerations in place of those derived from GPS positions, and accurately compensating for GPS lever-arm and attitude effects prior to computing accelerations from the GPS positions (Melachroinos et al. 2010; de Saint-Jean et al. 2005). The resulting corrected gravity profiles are compared to each other and to EGM08 in order to assess the accuracy and precision of each method. Preliminary results indicate that the methods presented in Peters and Brozena (1995) and Valliant (1991) completely correct the off-level error some of the time but only partially correct it at other times, while introducing an overall bias to the data of -0.5 to -2 mGal.
NASA Astrophysics Data System (ADS)
Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.
2017-08-01
Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
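For context, the absorption Ångström exponent discussed above is conventionally obtained from the wavelength dependence of the absorption coefficient. A two-wavelength sketch with invented coefficients is shown below.

```python
import numpy as np

def angstrom_exponent(b_abs_1, wl_1, b_abs_2, wl_2):
    """Absorption Angstrom exponent from absorption coefficients at two
    wavelengths: aABS = -ln(b1/b2) / ln(wl1/wl2). Input values are invented."""
    return -np.log(b_abs_1 / b_abs_2) / np.log(wl_1 / wl_2)

# Hypothetical absorption coefficients (Mm^-1) at 470 nm and 660 nm.
print(round(angstrom_exponent(3.2, 470.0, 1.8, 660.0), 2))
```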
NASA Astrophysics Data System (ADS)
Joshi, K. D.; Marchant, T. E.; Moore, C. J.
2017-03-01
A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719-33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
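The validation metric described above (mean pixel value of the selected tissue regions, and non-uniformity defined as the median absolute deviation of the per-region means) can be written compactly. The sketch below uses placeholder regions in a synthetic image.

```python
import numpy as np

def region_statistics(image, region_masks):
    """Mean pixel value over all selected regions and 'non-uniformity' defined,
    as in the abstract, as the median absolute deviation of per-region means."""
    region_means = np.array([image[m].mean() for m in region_masks])
    overall_mean = np.concatenate([image[m] for m in region_masks]).mean()
    non_uniformity = np.median(np.abs(region_means - np.median(region_means)))
    return overall_mean, non_uniformity

# Placeholder example: three small square "fat" regions in a synthetic image.
rng = np.random.default_rng(0)
img = rng.normal(-80, 10, (128, 128))                   # HU-like values
masks = []
for (r, c) in [(10, 10), (60, 40), (100, 90)]:
    m = np.zeros_like(img, dtype=bool)
    m[r:r + 8, c:c + 8] = True
    masks.append(m)
print(region_statistics(img, masks))
```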
Feedback-related brain activity predicts learning from feedback in multiple-choice testing.
Ernst, Benjamin; Steinhauser, Marco
2012-06-01
Different event-related potentials (ERPs) have been shown to correlate with learning from feedback in decision-making tasks and with learning in explicit memory tasks. In the present study, we investigated which ERPs predict learning from corrective feedback in a multiple-choice test, which combines elements from both paradigms. Participants worked through sets of multiple-choice items of a Swahili-German vocabulary task. Whereas the initial presentation of an item required the participants to guess the answer, corrective feedback could be used to learn the correct response. Initial analyses revealed that corrective feedback elicited components related to reinforcement learning (FRN), as well as to explicit memory processing (P300) and attention (early frontal positivity). However, only the P300 and early frontal positivity were positively correlated with successful learning from corrective feedback, whereas the FRN was even larger when learning failed. These results suggest that learning from corrective feedback crucially relies on explicit memory processing and attentional orienting to corrective feedback, rather than on reinforcement learning.
Lockhart, M.; Henzlova, D.; Croft, S.; ...
2017-09-20
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations, which likewise have only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.
Seok, Ji-Woo; Sohn, Jin-Hun
2018-01-01
Neuroimaging studies on the characteristics of individuals with Internet gaming disorder (IGD) have been accumulating due to growing concerns regarding the psychological and social problems associated with Internet use. However, relatively little is known about the brain characteristics underlying IGD, such as the associated functional connectivity and structure. The aim of this study was to investigate alterations in gray matter (GM) volume and functional connectivity during resting state in individuals with IGD using voxel-based morphometry and a resting-state connectivity analysis. The participants included 20 individuals with IGD and 20 age- and sex-matched healthy controls. Resting-state functional and structural images were acquired for all participants using 3 T magnetic resonance imaging. We also measured the severity of IGD and impulsivity using psychological scales. The results show that IGD severity was positively correlated with GM volume in the left caudate (p < 0.05, corrected for multiple comparisons), and negatively associated with functional connectivity between the left caudate and the right middle frontal gyrus (p < 0.05, corrected for multiple comparisons). This study demonstrates that IGD is associated with neuroanatomical changes in the right middle frontal cortex and the left caudate. These are important brain regions for reward and cognitive control processes, and structural and functional abnormalities in these regions have been reported for other addictions, such as substance abuse and pathological gambling. The findings suggest that structural deficits and resting-state functional impairments in the frontostriatal network may be associated with IGD and provide new insights into the underlying neural mechanisms of IGD. PMID:29636704
Martin, Maureen V; Rollins, Brandi; Sequeira, P Adolfo; Mesén, Andrea; Byerley, William; Stein, Richard; Moon, Emily A; Akil, Huda; Jones, Edward G; Watson, Stanley J; Barchas, Jack; DeLisi, Lynn E; Myers, Richard M; Schatzberg, Alan; Bunney, William E; Vawter, Marquis P
2009-01-01
Background The purpose of this study was to examine the effects of glucose reduction stress on lymphoblastic cell line (LCL) gene expression in subjects with schizophrenia compared to non-psychotic relatives. Methods LCLs were grown under two glucose conditions to measure the effects of glucose reduction stress on exon expression in subjects with schizophrenia compared to unaffected family member controls. A second aim of this project was to identify cis-regulated transcripts associated with diagnosis. Results There were a total of 122 transcripts with significant diagnosis-by-probeset interaction effects and 328 transcripts with significant glucose-deprivation-by-probeset interaction effects after correction for multiple comparisons. There were 8 transcripts with expression significantly affected by the interaction between diagnosis, glucose deprivation, and probeset after correction for multiple comparisons. The overall validation rate by qPCR of 13 diagnosis-effect genes identified through microarray was 62%, and all genes tested by qPCR showed concordant up- or down-regulation by qPCR and microarray. We assessed brain gene expression of five genes found to be altered by diagnosis and glucose deprivation in LCLs and found a significant decrease in expression of one gene, glutaminase, in the dorsolateral prefrontal cortex (DLPFC). One SNP with previously identified regulation by a 3' UTR SNP was found to influence IRF5 expression in both brain and lymphocytes. The relationship between the 3' UTR rs10954213 genotype and IRF5 expression was significant in LCLs (p = 0.0001), DLPFC (p = 0.007), and anterior cingulate cortex (p = 0.002). Conclusion Experimental manipulation of cell lines from subjects with schizophrenia may be a useful approach to explore stress-related gene expression alterations in schizophrenia and to identify SNP variants associated with gene expression. PMID:19772658
Occupancy of striatal and extrastriatal dopamine D2/D3 receptors by olanzapine and haloperidol.
Kessler, Robert M; Ansari, Mohammad Sib; Riccardi, Patrizia; Li, Rui; Jayathilake, Karuna; Dawant, Benoit; Meltzer, Herbert Y
2005-12-01
There have been conflicting reports as to whether olanzapine produces lower occupancy of striatal dopamine D(2)/D(3) receptors than typical antipsychotic drugs and preferential occupancy of extrastriatal dopamine D(2)/D(3) receptors. We performed [(18)F]fallypride PET studies in six schizophrenic subjects treated with olanzapine and six schizophrenic subjects treated with haloperidol to examine the occupancy of striatal and extrastriatal dopamine receptors by these antipsychotic drugs. [(18)F]setoperone PET studies were performed in seven olanzapine-treated subjects to determine 5-HT(2A) receptor occupancy. Occupancy of dopamine D(2)/D(3) receptors by olanzapine was not significantly different from that seen with haloperidol in the putamen, ventral striatum, medial thalamus, amygdala, or temporal cortex, that is, 67.5-78.2% occupancy; olanzapine produced no preferential occupancy of dopamine D(2)/D(3) receptors in the ventral striatum, medial thalamus, amygdala, or temporal cortex. There was, however, significantly lower occupancy of substantia nigra/VTA dopamine D(2)/D(3) receptors in olanzapine-treated compared to haloperidol-treated subjects, that is, 40.2 vs 59.3% (p=0.0014, corrected for multiple comparisons); in olanzapine-treated subjects, the substantia nigra/VTA was the only region with significantly lower dopamine D(2)/D(3) receptor occupancy than the putamen, that is, 40.2 vs 69.2% (p<0.001, corrected for multiple comparisons). Occupancy of 5-HT(2A) receptors was 85-93% in the olanzapine-treated subjects. The results of this study demonstrate that olanzapine does not produce preferential occupancy of extrastriatal dopamine D(2)/D(3) receptors but does spare substantia nigra/VTA receptors. Sparing of substantia nigra/VTA dopamine D(2)/D(3) receptor occupancy may contribute to the low incidence of extrapyramidal side effects in olanzapine-treated patients.
Gray-matter volume, midbrain dopamine D2/D3 receptors and drug craving in methamphetamine users.
Morales, A M; Kohno, M; Robertson, C L; Dean, A C; Mandelkern, M A; London, E D
2015-06-01
Dysfunction of the mesocorticolimbic system has a critical role in clinical features of addiction. Despite evidence suggesting that midbrain dopamine receptors influence amphetamine-induced dopamine release and that dopamine is involved in methamphetamine-induced neurotoxicity, associations between dopamine receptors and gray-matter volume have been unexplored in methamphetamine users. Here we used magnetic resonance imaging and [(18)F]fallypride positron emission tomography, respectively, to measure gray-matter volume (in 58 methamphetamine users) and dopamine D2/D3 receptor availability (binding potential relative to nondisplaceable uptake of the radiotracer, BPnd) (in 31 methamphetamine users and 37 control participants). Relationships between these measures and self-reported drug craving were examined. Although no difference in midbrain D2/D3 BPnd was detected between methamphetamine and control groups, midbrain D2/D3 BPnd was positively correlated with gray-matter volume in the striatum, prefrontal cortex, insula, hippocampus and temporal cortex in methamphetamine users, but not in control participants (group-by-midbrain D2/D3 BPnd interaction, P<0.05 corrected for multiple comparisons). Craving for methamphetamine was negatively associated with gray-matter volume in the insula, prefrontal cortex, amygdala, temporal cortex, occipital cortex, cerebellum and thalamus (P<0.05 corrected for multiple comparisons). A relationship between midbrain D2/D3 BPnd and methamphetamine craving was not detected. Lower midbrain D2/D3 BPnd may increase vulnerability to deficits in gray-matter volume in mesocorticolimbic circuitry in methamphetamine users, possibly reflecting greater dopamine-induced toxicity. Identifying factors that influence prefrontal and limbic volume, such as midbrain BPnd, may be important for understanding the basis of drug craving, a key factor in the maintenance of substance-use disorders.
Gray-Matter Volume, Midbrain Dopamine D2/D3 Receptors and Drug Craving in Methamphetamine Users
Morales, Angelica A.; Kohno, Milky; Robertson, Chelsea L.; Dean, Andy C.; Mandelkern, Mark A.; London, Edythe D.
2015-01-01
Dysfunction of the mesocorticolimbic system plays a critical role in clinical features of addiction. Despite evidence suggesting that midbrain dopamine receptors influence amphetamine-induced dopamine release and that dopamine is involved in methamphetamine-induced neurotoxicity, associations between dopamine receptors and gray-matter volume have been unexplored in methamphetamine users. Here we used magnetic resonance imaging and [18F]fallypride positron emission tomography, respectively, to measure gray-matter volume (in 58 methamphetamine users) and dopamine D2/D3 receptor availability (binding potential relative to nondisplaceable uptake of the radiotracer, BPnd) (in 31 methamphetamine users and 37 control participants). Relationships between these measures and self-reported drug craving were examined. Although no difference in midbrain D2/D3 BPnd was detected between methamphetamine and control groups, midbrain D2/D3 BPnd was positively correlated with gray-matter volume in the striatum, prefrontal cortex, insula, hippocampus and temporal cortex in methamphetamine users, but not in control participants (group-by-midbrain D2/D3 BPnd interaction, p<0.05 corrected for multiple comparisons). Craving for methamphetamine was negatively associated with gray-matter volume in the insula, prefrontal cortex, amygdala, temporal cortex, occipital cortex, cerebellum, and thalamus (p<0.05 corrected for multiple comparisons). A relationship between midbrain D2/D3 BPnd and methamphetamine craving was not detected. Lower midbrain D2/D3 BPnd may increase vulnerability to deficits in gray-matter volume in mesocorticolimbic circuitry in methamphetamine users, possibly reflecting greater dopamine-induced toxicity. Identifying factors that influence prefrontal and limbic volume, such as midbrain BPnd, may be important for understanding the basis of drug craving, a key factor in the maintenance of substance use disorders. PMID:25896164
Naaijen, J; Bralten, J; Poelmans, G; Glennon, J C; Franke, B; Buitelaar, J K
2017-01-10
Attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorders (ASD) often co-occur. Both are highly heritable; however, it has been difficult to discover genetic risk variants. Glutamate and GABA are the main excitatory and inhibitory neurotransmitters in the brain; their balance is essential for proper brain development and functioning. In this study we investigated the role of glutamate and GABA genetics in ADHD severity, autism symptom severity and inhibitory performance, based on gene set analysis, an approach to investigate multiple genetic variants simultaneously. Common variants within glutamatergic and GABAergic genes were investigated using the MAGMA software in an ADHD case-only sample (n=931), in which we assessed ASD symptoms and response inhibition on a Stop task. Gene set analyses for ADHD symptom severity, divided into inattention and hyperactivity/impulsivity symptoms, autism symptom severity and inhibition were performed using principal component regression analyses. Subsequently, gene-wide association analyses were performed. The glutamate gene set showed an association with severity of hyperactivity/impulsivity (P=0.009), which was robust to correction for genome-wide association levels. The GABA gene set showed a nominally significant association with inhibition (P=0.04), but this did not survive correction for multiple comparisons. None of the single-gene or single-variant associations was significant on its own. By analyzing multiple genetic variants within candidate gene sets together, we were able to find genetic associations supporting the involvement of excitatory and inhibitory neurotransmitter systems in ADHD and ASD symptom severity in ADHD.
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.; Rokni, Mohammad
1990-01-01
The testing and comparison of two Extended Kalman Filters (EKFs) developed for the Earth Radiation Budget Satellite (ERBS) is described. One EKF updates the attitude quaternion using a four component additive error quaternion. This technique is compared to that of a second EKF, which uses a multiplicative error quaternion. A brief development of the multiplicative algorithm is included. The mathematical development of the additive EKF was presented in the 1989 Flight Mechanics/Estimation Theory Symposium along with some preliminary testing results using real spacecraft data. A summary of the additive EKF algorithm is included. The convergence properties, singularity problems, and normalization techniques of the two filters are addressed. Both filters are also compared to those from the ERBS operational ground support software, which uses a batch differential correction algorithm to estimate attitude and gyro biases. Sensitivity studies are performed on the estimation of sensor calibration states. The potential application of the EKF for real time and non-real time ground attitude determination and sensor calibration for future missions such as the Gamma Ray Observatory (GRO) and the Small Explorer Mission (SMEX) is also presented.
Initial Correction versus Negative Marking in Multiple Choice Examinations
ERIC Educational Resources Information Center
Van Hecke, Tanja
2015-01-01
Optimal assessment tools should measure in a limited time the knowledge of students in a correct and unbiased way. A method for automating the scoring is multiple choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: the number right scoring, the initial correction (IC) and…
Shifflett, Benjamin; Huang, Rong; Edland, Steven D
2017-01-01
Genotypic association studies are prone to inflated type I error rates if multiple hypothesis testing is performed, e.g., sequentially testing for recessive, multiplicative, and dominant risk. Alternatives to multiple hypothesis testing include the model-independent genotypic χ² test, the efficiency-robust MAX statistic, which corrects for multiple comparisons but with some loss of power, or a single Armitage test for multiplicative trend, which has optimal power when the multiplicative model holds but with some loss of power when dominant or recessive models underlie the genetic association. We used Monte Carlo simulations to describe the relative performance of these three approaches under a range of scenarios. All three approaches maintained their nominal type I error rates. The genotypic χ² and MAX statistics were more powerful when testing a strictly recessive genetic effect or when testing a dominant effect when the allele frequency was high. The Armitage test for multiplicative trend was most powerful for the broad range of scenarios where heterozygote risk is intermediate between recessive and dominant risk. Moreover, all tests had limited power to detect recessive genetic risk unless the sample size was large, and conversely all tests were relatively well powered to detect dominant risk. Taken together, these results suggest the general utility of the multiplicative trend test when the underlying genetic model is unknown.
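To make two of the competing tests concrete, the sketch below computes the genotypic χ² test on an invented 2 x 3 genotype table and a logistic-regression analogue of the Armitage trend test (genotype coded additively as 0/1/2). The counts, and the use of logistic regression in place of the classical trend-test formula, are illustrative assumptions.

```python
# Sketch of two approaches compared above, on an invented 2x3 genotype table:
# the model-free genotypic chi-square test and a logistic-regression analogue
# of the Armitage trend test (genotype coded 0/1/2).
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Rows: cases, controls; columns: genotype counts for aa, Aa, AA.
table = np.array([[120, 230, 150],
                  [150, 240, 110]])

chi2, p_genotypic, dof, _ = chi2_contingency(table)
print(f"genotypic chi2 test: chi2 = {chi2:.2f}, df = {dof}, p = {p_genotypic:.4f}")

# Expand the table into per-subject records for the trend (additive) model.
genotype = np.repeat([0, 1, 2, 0, 1, 2], table.ravel())
status = np.repeat([1, 1, 1, 0, 0, 0], table.ravel())
fit = sm.Logit(status, sm.add_constant(genotype)).fit(disp=0)
print(f"trend (additive) model: p = {fit.pvalues[1]:.4f}")
```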
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that allows to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations through observational data and outperforms simpler correction methods which relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend preserving, versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend preserving approach of bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles in the same resolution as the given data and include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015 A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016
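A minimal empirical quantile-mapping sketch (the static baseline that the regression-based RQM approach described above is designed to extend) is given below; it maps each model value through the model CDF onto the observed quantile function, using synthetic data.

```python
# Minimal empirical quantile mapping on synthetic data. Illustrative baseline
# only; it has no time dependence and is not the RQM method itself.
import numpy as np

def quantile_map(model_hist, obs_hist, model_values, n_q=100):
    q = np.linspace(0.01, 0.99, n_q)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # CDF position of each value under the model climate, then invert on obs.
    cdf_pos = np.interp(model_values, model_q, q)
    return np.interp(cdf_pos, q, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 5000)          # "observed" precipitation-like data
model = rng.gamma(2.0, 2.5, 5000) + 1.0  # biased model climate
corrected = quantile_map(model, obs, model[:10])
print(np.round(corrected, 2))
```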
Yan, Chao; Wang, Yi; Su, Li; Xu, Ting; Yin, Da-Zhi; Fan, Ming-Xia; Deng, Ci-Ping; Wang, Zhao-Xin; Lui, Simon S Y; Cheung, Eric F C; Chan, Raymond C K
2016-08-30
Schizotypy is associated with anhedonia. However, previous findings on the neural substrates of anhedonia in schizotypy are mixed. In the present study, we measured the neural substrates associated with reward anticipation and consummation in positive and negative schizotypy using functional MRI. The Monetary Incentive Delay task was administered to 33 individuals with schizotypy (18 with positive schizotypy (PS), 15 with negative schizotypy (NS)) and 22 healthy controls. Comparisons between schizotypy individuals and controls were performed using two-sample t tests for contrast images involving the gain versus non-gain anticipation condition and the gain versus non-gain consummation condition. Multiple comparisons were corrected using Monte Carlo simulation correction at p<.05. The results showed no significant difference in brain activity between controls and schizotypy individuals as a whole during gain anticipation or consummation. However, during the consummatory phase, NS individuals, rather than PS individuals, showed diminished left amygdala and left putamen activity compared with controls. We observed significantly weaker activation of the left ventral striatum during gain anticipation in NS individuals compared with controls. PS individuals, however, exhibited enhanced right ventral lateral prefrontal activity. These findings suggest that different dimensions of schizotypy may be underpinned by different neural dysfunctions in reward anticipation and consummation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Relationship between Brain Age-Related Reduction in Gray Matter and Educational Attainment
Rzezak, Patricia; Squarzoni, Paula; Duran, Fabio L.; de Toledo Ferraz Alves, Tania; Tamashiro-Duran, Jaqueline; Bottino, Cassio M.; Ribeiz, Salma; Lotufo, Paulo A.; Menezes, Paulo R.; Scazufca, Marcia; Busatto, Geraldo F.
2015-01-01
Inter-subject variability in age-related brain changes may relate to educational attainment, as suggested by cognitive reserve theories. This voxel-based morphometry study investigated the impact of very low educational level on the relationship between regional gray matter (rGM) volumes and age in healthy elders. Magnetic resonance imaging data were acquired in elders with low educational attainment (less than 4 years) (n = 122) and high educational level (n = 66), pooling together individuals examined using any of three MRI scanners/acquisition protocols. Voxelwise group comparisons showed no rGM differences (p<0.05, family-wise error corrected for multiple comparisons). When within-group voxelwise patterns of linear correlation were compared between high and low education groups, there was one cluster of greater rGM loss with aging in low versus high education elders in the left anterior cingulate cortex (p<0.05, FWE-corrected), as well as a trend in the left dorsomedial prefrontal cortex (p<0.10). These results provide preliminary indication that education might exert subtle protective effects against age-related brain changes in healthy subjects. The anterior cingulate cortex, critical to inhibitory control processes, may be particularly sensitive to such effects, possibly given its involvement in cognitively stimulating activities at school or later throughout life. PMID:26474472
Odegard, Timothy N; Koen, Joshua D
2007-11-01
Both positive and negative testing effects have been demonstrated with a variety of materials and paradigms (Roediger & Karpicke, 2006b). The present series of experiments replicate and extend the research of Roediger and Marsh (2005) with the addition of a "none-of-the-above" response option. Participants (n=32 in both experiments) read a set of passages, took an initial multiple-choice test, completed a filler task, and then completed a final cued-recall test (Experiment 1) or multiple-choice test (Experiment 2). Questions were manipulated on the initial multiple-choice test by adding a "none-of-the-above" response alternative (choice "E") that was incorrect ("E" Incorrect) or correct ("E" Correct). The results from both experiments demonstrated that the positive testing effect was negated when the "none-of-the-above" alternative was the correct response on the initial multiple-choice test, but was still present when the "none-of-the-above" alternative was an incorrect response.
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the free-response receiver operating characteristic (FROC) method and the binormal alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple-observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL; Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We relocate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
Chang, Fong-Ching; Chi, Hsueh-Yun; Huang, Li-Jung; Lee, Chun-Hsien; Yang, Jyun-Long; Yeh, Ming-Kung
2015-01-01
To evaluate the effectiveness of the health promoting school (HPS)-community pharmacist partnership program that promotes students' correct medication use and enhances pain medication literacy in Taiwan. Pre- and post-studies and intervention/comparison group comparisons. Primary and middle schools, along with their communities, in Taiwan. In 2013, baseline and follow-up self-administered, online surveys were received from 5,373 students enrolled in intervention primary and middle schools and from 4,643 students enrolled in comparison primary and middle schools. The level of medication literacy, including correct medication use knowledge, self-efficacy, and skills. The development and implementation of the HPS-community pharmacist partnership program in primary and middle schools significantly enhanced students' knowledge, self-efficacy, and skills in correct medication use and pain medication literacy (P <0.001). The HPS-community pharmacist partnership had a positive impact on enhancing correct medication use and pain medication literacy in Taiwan.
ERIC Educational Resources Information Center
Tonisson, Eno; Lepp, Marina
2015-01-01
The answers offered by computer algebra systems (CAS) can sometimes differ from those expected by the students or teachers. The comparison of the students' answers and CAS answers could provide ground for discussion about equivalence and correctness. Investigating the students' comparison of the answers gives the possibility to study different…
NASA Astrophysics Data System (ADS)
Gu, Yanchao; Fan, Dongming; You, Wei
2017-07-01
Eleven GPS crustal vertical displacement (CVD) solutions for 110 IGS08/IGS14 core stations provided by the International Global Navigation Satellite Systems Service Analysis Centers are compared with seven Gravity Recovery and Climate Experiment (GRACE)-modeled CVD solutions. The results of the internal comparison of the GPS solutions from multiple institutions imply large uncertainty in the GPS postprocessing. There is also evidence that GRACE solutions from both different institutions and different processing approaches (mascon and traditional spherical harmonic coefficients) show similar results, suggesting that GRACE can provide CVD results of good internal consistency. When the uncertainty of the GPS data is accounted for, the GRACE data can explain as much as 50% of the actual signals and more than 80% of the GPS annual signals. Our study strongly indicates that GRACE data have great potential to correct the nontidal loading in GPS time series.
Khachatryan, Vardan
2015-10-20
In this study, a comparison of the differential cross sections for the processes Z/γ* + jets and photon (γ) + jets is presented. The measurements are based on data collected with the CMS detector at √s = 8 TeV corresponding to an integrated luminosity of 19.7 fb⁻¹. The differential cross sections and their ratios are presented as functions of p_T. The measurements are also shown as functions of the jet multiplicity. Differential cross sections are obtained as functions of the ratio of the Z/γ* p_T to the sum of all jet transverse momenta and of the ratio of the Z/γ* p_T to the leading jet transverse momentum. The data are corrected for detector effects and are compared to simulations based on several QCD calculations.
MODIS-VIIRS Intercalibration for Dark Target Aerosol Retrieval Over Ocean
NASA Astrophysics Data System (ADS)
Sawyer, V. R.; Levy, R. C.; Mattoo, S.; Quinn, G.; Veglio, P.
2016-12-01
Any future climate record for satellite aerosol retrieval will require continuity over multiple decades, longer than the lifespan of an individual satellite instrument. The Dark Target algorithm was developed for MODIS, which began taking observations in 1999; the two MODIS instruments currently in orbit are not expected to continue taking observations beyond the early 2020s. However, the algorithm is portable, and a Dark Target product for VIIRS is scheduled for release December 2016. Because MODIS and VIIRS operate at different wavelengths, resolutions, fields of view and orbital timing, the transition can introduce artifacts that must be corrected. Without these corrections, it will be difficult to find any changes that may occur in the global aerosol climate record over time periods that span the transition from MODIS to VIIRS retrievals. The University of Wisconsin-Madison SIPS team found thousands of matches between 2012 and 2016 in which Aqua-MODIS and Suomi-NPP VIIRS observe the same location at similar times and view angles. These matched cases are used to identify corresponding matches in the Intermediate File Format (IFF) aerosol retrievals for MODIS and VIIRS, which are compared to one another in turn. Because most known sources of disagreement between the two instruments have already been corrected during the IFF retrieval, the direct comparison between near-collocated cases shows only the differences that remain at local and regional scales. The comparison is further restricted to clear-sky cases over ocean, so that the investigation of seasonal, diurnal and geographic variation is not affected by uncertainties in the land surface or cloud contamination.
Han, Buhm; Kang, Hyun Min; Eskin, Eleazar
2009-01-01
With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
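A brute-force Monte Carlo version of the MVN idea can illustrate the correction this abstract describes; this sketch is not the sliding-window SLIDE algorithm itself (which is far more efficient), and it assumes the marker correlation matrix is available.

```python
import numpy as np

def mvn_corrected_pvalue(z_obs, corr, n_sim=100_000, seed=0):
    """MVN-based multiple-testing correction by simulation: the corrected
    p-value is the chance that the maximum |Z| over all correlated
    markers exceeds the observed statistic under the null."""
    rng = np.random.default_rng(seed)
    m = corr.shape[0]
    z = rng.multivariate_normal(np.zeros(m), corr, size=n_sim)
    max_abs = np.abs(z).max(axis=1)
    return np.mean(max_abs >= abs(z_obs))

# Toy example: 50 markers with AR(1)-like local correlation.
m, rho = 50, 0.8
idx = np.arange(m)
corr = rho ** np.abs(idx[:, None] - idx[None, :])
print(mvn_corrected_pvalue(z_obs=3.2, corr=corr))
```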
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analyses. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
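A minimal sketch of simple allometry and the MLP correction mentioned above; the species data are made up and the human MLP value used here (93.4 years, a figure commonly quoted in allometric scaling work) is an assumption, not taken from the paper.

```python
import numpy as np

def simple_allometry(bw, cl, bw_human=70.0):
    """Fit log(CL) = log(a) + b*log(BW) across species and
    extrapolate clearance to a human body weight."""
    b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)
    return np.exp(log_a) * bw_human ** b

def mlp_corrected_allometry(bw, cl, mlp, bw_human=70.0, mlp_human=93.4):
    """Allometry on CL*MLP (maximum life-span potential), then divide
    the human prediction by the human MLP (years)."""
    b, log_a = np.polyfit(np.log(bw), np.log(cl * mlp), 1)
    return np.exp(log_a) * bw_human ** b / mlp_human

# Hypothetical rat/dog/monkey data: BW in kg, CL in mL/min, MLP in years.
bw = np.array([0.25, 10.0, 5.0])
cl = np.array([2.0, 55.0, 30.0])
mlp = np.array([4.7, 19.7, 22.3])
print(simple_allometry(bw, cl), mlp_corrected_allometry(bw, cl, mlp))
```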
Valence bond and von Neumann entanglement entropy in Heisenberg ladders.
Kallin, Ann B; González, Iván; Hastings, Matthew B; Melko, Roger G
2009-09-11
We present a direct comparison of the recently proposed valence bond entanglement entropy and the von Neumann entanglement entropy on spin-1/2 Heisenberg systems using quantum Monte Carlo and density-matrix renormalization group simulations. For one-dimensional chains we show that the valence bond entropy can be either less or greater than the von Neumann entropy; hence, it cannot provide a bound on the latter. On ladder geometries, simulations with up to seven legs are sufficient to indicate that the von Neumann entropy in two dimensions obeys an area law, even though the valence bond entanglement entropy has a multiplicative logarithmic correction.
Kim, S W; Hoover, K M
1996-02-01
We administered the Tridimensional Personality Questionnaire to 40 control subjects and to 47 social phobia patients who met Structured Clinical Interview for DSM-III-R (SCID) criteria for social phobia and participated in a multicenter treatment study. Multiple comparisons with Bonferroni correction showed a significant increase in total Harm Avoidance scale scores and all four subscale scores for the social phobia group. On a Reward Dependence subscale that measures persistence versus irresoluteness the mean was significantly lower in the social phobia group than the control group. Present findings extend an earlier report of increased Harm Avoidance in major depressive disorder and other clinical diagnostic groups.
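The Bonferroni adjustment used in this and several of the following studies amounts to a one-liner; a minimal sketch with illustrative p-values:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: compare each raw p-value against alpha
    divided by the number of comparisons, or equivalently report
    adjusted p-values capped at 1."""
    m = len(p_values)
    threshold = alpha / m
    adjusted = [min(p * m, 1.0) for p in p_values]
    return threshold, adjusted

# e.g. a total scale plus four subscales -> five comparisons
raw = [0.001, 0.004, 0.012, 0.020, 0.300]
print(bonferroni(raw))
```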
Risk attitudes and birth order.
Krause, Philipp; Heindl, Johannes; Jung, Andreas; Langguth, Berthold; Hajak, Göran; Sand, Philipp G
2014-07-01
Risk attitudes play important roles in health behavior and everyday decision making. It is unclear, however, whether these attitudes can be predicted from birth order. We investigated 200 mostly male volunteers from two distinct settings. After correcting for multiple comparisons, for the number of siblings and for confounding by gender, ordinal position predicted perception of health-related risks among participants in extreme sports (p < .01). However, the direction of the effect contradicted Adlerian theory. Except for alcohol consumption, these findings extended to self-reported risk behavior. Together, the data call for a cautious stand on the impact of birth order on risk attitudes. © The Author(s) 2013.
Effects of preprocessing Landsat MSS data on derived features
NASA Technical Reports Server (NTRS)
Parris, T. M.; Cicone, R. C.
1983-01-01
Important to the use of multitemporal Landsat MSS data for earth resources monitoring, such as agricultural inventories, is the ability to minimize the effects of varying atmospheric and satellite viewing conditions while extracting physically meaningful features from the data. In general, approaches to the preprocessing problem have been derived from either physical or statistical models. This paper compares three proposed algorithms: XSTAR haze correction, Color Normalization, and Multiple Acquisition Mean Level Adjustment. These techniques represent physical, statistical, and hybrid physical-statistical models, respectively. The comparisons are made in the context of three feature extraction techniques: the Tasseled Cap, the Cate Color Cube, and the Normalized Difference.
Daniels, Benjamin; Dolinger, Amy; Bedoya, Guadalupe; Rogo, Khama; Goicoechea, Ana; Coarasa, Jorge; Wafula, Francis; Mwaura, Njeri; Kimeu, Redemptar; Das, Jishnu
2017-01-01
The quality of clinical care can be reliably measured in multiple settings using standardised patients (SPs), but this methodology has not been extensively used in Sub-Saharan Africa. This study validates the use of SPs for a variety of tracer conditions in Nairobi, Kenya, and provides new results on the quality of care in sampled primary care clinics. We deployed 14 SPs in private and public clinics presenting either asthma, child diarrhoea, tuberculosis or unstable angina. Case management guidelines and checklists were jointly developed with the Ministry of Health. We validated the SP method based on the ability of SPs to avoid detection or dangerous situations, without imposing a substantial time burden on providers. We also evaluated the sensitivity of quality measures to SP characteristics. We assessed quality of practice through adherence to guidelines and checklists for the entire sample, stratified by case and stratified by sector, and in comparison with previously published results from urban India, rural India and rural China. Across 166 interactions in 42 facilities, detection rates and exposure to unsafe conditions were both zero. There were no detected outcome correlations with SP characteristics that would bias the results. Across all four conditions, 53% of SPs were correctly managed with wide variation across tracer conditions. SPs paid 76% less in public clinics, but proportions of correct management were similar to private clinics for three conditions and higher for the fourth. Kenyan outcomes compared favourably with India and China in all but the angina case. The SP method is safe and effective in the urban Kenyan setting for the assessment of clinical practice. The pilot results suggest that public providers in this setting provide similar rates of correct management to private providers at significantly lower out-of-pocket costs for patients. However, comparisons across countries are sensitive to the tracer condition considered.
Comparison of answer-until-correct and full-credit assessments in a team-based learning course.
Farland, Michelle Z; Barlow, Patrick B; Levi Lancaster, T; Franks, Andrea S
2015-03-25
To assess the impact of awarding partial credit to team assessments on team performance and on quality of team interactions using an answer-until-correct method compared to traditional methods of grading (multiple-choice, full-credit). Subjects were students from 3 different offerings of an ambulatory care elective course, taught using team-based learning. The control group (full-credit) consisted of those enrolled in the course when traditional methods of assessment were used (2 course offerings). The intervention group consisted of those enrolled in the course when answer-until-correct method was used for team assessments (1 course offering). Study outcomes included student performance on individual and team readiness assurance tests (iRATs and tRATs), individual and team final examinations, and student assessment of quality of team interactions using the Team Performance Scale. Eighty-four students enrolled in the courses were included in the analysis (full-credit, n=54; answer-until-correct, n=30). Students who used traditional methods of assessment performed better on iRATs (full-credit mean 88.7 (5.9), answer-until-correct mean 82.8 (10.7), p<0.001). Students who used answer-until-correct method of assessment performed better on the team final examination (full-credit mean 45.8 (1.5), answer-until-correct 47.8 (1.4), p<0.001). There was no significant difference in performance on tRATs and the individual final examination. Students who used the answer-until-correct method had higher quality of team interaction ratings (full-credit 97.1 (9.1), answer-until-correct 103.0 (7.8), p=0.004). Answer-until-correct assessment method compared to traditional, full-credit methods resulted in significantly lower scores for iRATs, similar scores on tRATs and individual final examinations, improved scores on team final examinations, and improved perceptions of the quality of team interactions.
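A sketch of how answer-until-correct partial credit might be scored for a team readiness assurance test; the credit schedule below is hypothetical and not the one used in the course described above.

```python
def auc_item_score(attempts_to_correct, credit=(1.0, 0.5, 0.25, 0.0)):
    """Answer-until-correct scoring: a team keeps answering until it finds
    the keyed option; credit decreases with each additional attempt.
    The credit schedule here is illustrative only."""
    idx = min(attempts_to_correct - 1, len(credit) - 1)
    return credit[idx]

def trat_score(attempts_per_item):
    """Percentage score for a team readiness assurance test (tRAT)."""
    earned = sum(auc_item_score(a) for a in attempts_per_item)
    return 100.0 * earned / len(attempts_per_item)

print(trat_score([1, 1, 2, 1, 3]))  # 75.0 for this five-item tRAT
```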
NASA Astrophysics Data System (ADS)
Sun, Phillip Z.; Zhou, Iris Y.; Igarashi, Takahiro; Guo, Yingkun; Xiao, Gang; Wu, Renhua
2015-03-01
Chemical exchange saturation transfer (CEST) MRI is sensitive to dilute exchangeable protons and local properties such as pH and temperature, yet its susceptibility to field inhomogeneity limits its in vivo applications. In particular, the CEST measurement varies with RF irradiation power, and the dependence is complex because of the concomitant direct RF saturation (RF spillover) effect. Because volume transmitters provide a relatively homogeneous RF field, they have conventionally been used for CEST imaging despite their elevated specific absorption rate (SAR) and lower sensitivity relative to surface coils. To address this limitation, we developed an efficient B1 inhomogeneity correction algorithm that enables CEST MRI using surface transceiver coils. This builds on recent work showing that the inverse CEST asymmetry analysis (CESTR_ind) is not susceptible to the confounding RF spillover effect. We postulated that the linear relationship between RF power level and CESTR_ind can be extended to correct B1 inhomogeneity-induced CEST MRI artifacts. Briefly, we prepared a tissue-like creatine gel pH phantom and collected multiparametric MRI, including relaxation, field map, and CEST MRI under multiple RF power levels, using a conventional surface transceiver coil. The raw CEST images showed substantial heterogeneity due to B1 inhomogeneity, with a pH contrast-to-noise ratio (CNR) of 8.8. In comparison, the pH MRI CNR of the field-inhomogeneity-corrected CEST MRI was 17.2, substantially higher than that without correction. To summarize, our study validated an efficient field inhomogeneity correction that enables sensitive CEST MRI with a surface transceiver, promising for in vivo translation.
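A sketch of a voxel-wise B1 correction built on the assumption stated in the abstract, namely that the inverse CEST measure varies linearly with the local RF power; array shapes and names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def correct_cest_b1(cest_inv, b1_map, power_levels, b1_target):
    """Fit each voxel's inverse-CEST value against the RF power it
    actually experienced (nominal level times relative B1 from the field
    map), then read the fitted line off at the target power."""
    n_levels, ny, nx = cest_inv.shape
    y = cest_inv.reshape(n_levels, -1)                     # (levels, voxels)
    local_power = np.outer(power_levels, b1_map.ravel())   # (levels, voxels)
    corrected = np.empty(y.shape[1])
    for v in range(y.shape[1]):
        slope, intercept = np.polyfit(local_power[:, v], y[:, v], 1)
        corrected[v] = intercept + slope * b1_target
    return corrected.reshape(ny, nx)
```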
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive such that it estimates weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by the neural network model of the leaky integrate and fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.
Gutowski, Stanley J; Stromer, Robert
2003-01-01
Delayed matching to complex, two-picture samples (e.g., cat-dog) may be improved when the samples occasion differential verbal behavior. In Experiment 1, individuals with mental retardation matched picture comparisons to identical single-picture samples or to two-picture samples, one of which was identical to a comparison. Accuracy scores were typically high on single-picture trials under both simultaneous and delayed matching conditions. Scores on two-picture trials were also high during the simultaneous condition but were lower during the delay condition. However, scores improved on delayed two-picture trials when each of the sample pictures was named aloud before comparison responding. Experiment 2 replicated these results with preschoolers with typical development and a youth with mental retardation. Sample naming also improved the preschoolers' matching when the samples were pairs of spoken names and the correct comparison picture matched one of the names. Collectively, the participants could produce the verbal behavior that might have improved performance, but typically did not do so unless the procedure required it. The success of the naming intervention recommends it for improving the observing and remembering of multiple elements of complex instructional stimuli.
Separating stages of arithmetic verification: An ERP study with a novel paradigm.
Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes
2015-08-01
In studies of arithmetic verification, participants typically encounter two operands and they carry out an operation on these (e.g. adding them). Operands are followed by a proposed answer and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. In order to overcome this problem here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus properties mismatch and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto
2008-01-01
This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes designed to obtain the lowest bias when compared with the observed values. Inter-comparison and cross-validation tests have been carried out for the control algorithm (TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT), and the TMPA research version, for months belonging to different seasons and for different network densities. All compared merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial scale (regional network) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques. This is also true when a degraded daily gauge network is used instead of the full dataset. This technique appears to be a suitable tool to produce real-time, high-resolution, high-quality gauge-satellite based analyses of daily precipitation over land in regional domains.
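A minimal sketch of the two bias-correction schemes named above; the operational combined scheme merges them on a gridded basis, so everything here is simplified and the numbers are made up.

```python
import numpy as np

def additive_correction(sat_field, gauge, sat_at_gauges):
    """ADD scheme: shift the satellite field by the mean
    gauge-minus-satellite difference at the gauge locations."""
    return sat_field + np.mean(np.asarray(gauge) - np.asarray(sat_at_gauges))

def ratio_correction(sat_field, gauge, sat_at_gauges, eps=1e-6):
    """RAT scheme: rescale the satellite field by the ratio of mean gauge
    precipitation to mean satellite precipitation at the gauges."""
    ratio = np.mean(gauge) / max(np.mean(sat_at_gauges), eps)
    return sat_field * ratio

# Illustrative daily totals (mm): four satellite pixels, three gauges.
sat_field = np.array([4.0, 7.5, 1.0, 12.0])
gauge, sat_at_gauges = np.array([5.0, 9.0, 2.0]), np.array([4.0, 7.5, 1.0])
print(additive_correction(sat_field, gauge, sat_at_gauges))
print(ratio_correction(sat_field, gauge, sat_at_gauges))
```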
Improved determination of particulate absorption from combined filter pad and PSICAM measurements.
Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David
2016-10-31
Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
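The regression correction described above reduces to an ordinary least-squares fit of raw filter-pad absorption against PSICAM absorption; a minimal sketch with illustrative array names follows.

```python
import numpy as np

def calibrate_filter_pad(a_fp_raw, a_psicam):
    """Regress raw filter-pad absorption against PSICAM absorption at
    matching wavelengths: the slope estimates the pathlength
    amplification factor beta, the intercept the scattering offset o."""
    beta, offset = np.polyfit(a_psicam, a_fp_raw, 1)
    return beta, offset

def correct_filter_pad(a_fp_raw, beta, offset):
    """Apply the regression-derived correction to a filter-pad spectrum."""
    return (a_fp_raw - offset) / beta
```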
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiandra, Christian; Fusella, Marco; Filippi, Andrea Riccardo
2013-08-15
Purpose: Patient-specific quality assurance in volumetric modulated arc therapy (VMAT) brain stereotactic radiosurgery raises specific issues on dosimetric procedures, mainly represented by the small radiation fields associated with the lack of lateral electronic equilibrium, the need for small detectors, and the high dose delivered (up to 30 Gy). Gafchromic™ EBT2 and EBT3 films may be considered the dosimeter of choice, and the authors here provide some additional data about uniformity correction for this new generation of radiochromic films. Methods: A new analysis method using the blue channel for marker-dye correction was proposed for uniformity correction for both EBT2 and EBT3 films. Symmetry, flatness, and field width of a reference field were analyzed to provide a high-spatial-resolution evaluation of the film uniformity for EBT3. Absolute doses were compared with thermoluminescent dosimeters (TLD) as baseline. VMAT plans with multiple noncoplanar arcs were generated with a treatment planning system for a selected pool of eleven patients with cranial lesions and then recalculated on a water-equivalent plastic phantom by a Monte Carlo algorithm for patient-specific QA. 2D quantitative dose comparison parameters were calculated for the computed and measured dose distributions and tested for statistically significant differences. Results: Sensitometric curves showed a different behavior above a dose of 5 Gy for EBT2 and EBT3 films; with the use of the in-house marker-dye correction method, the authors obtained values of 2.5% for flatness, 1.5% for symmetry, and a field width of 4.8 cm for a 5 × 5 cm² reference field. Compared with TLD and selecting a 5% dose tolerance, the percentage of points with ICRU index below 1 was 100% for EBT2 and 83% for EBT3. Patient analysis revealed statistically significant differences (p < 0.05) between EBT2 and EBT3 in the percentage of points with gamma values <1 (p = 0.009 and p = 0.016); the percent difference as well as the mean difference between calculated and measured isodoses (20% and 80%) were found not to be significant (p = 0.074, p = 0.185, and p = 0.57). Conclusions: Excellent performance in terms of dose homogeneity was obtained using a new blue-channel method for marker-dye correction on both EBT2 and EBT3 Gafchromic™ films. In comparison with TLD, the passing rates for the EBT2 film were higher than for EBT3; good agreement with data estimated by the Monte Carlo algorithm was found for both films, with some statistically significant differences again in favor of EBT2. These results suggest that the use of Gafchromic™ EBT2 and EBT3 films is appropriate for dose verification measurements in VMAT stereotactic radiosurgery; taking into account the uncertainty associated with Gafchromic film dosimetry, the use of adequate action levels is strongly advised, in particular for EBT3.
Poulin, Julie; Chouinard, Sylvie; Pampoulova, Tania; Lecomte, Yves; Stip, Emmanuel; Godbout, Roger
2010-10-30
Patients with schizophrenia may have sleep disorders even when clinically stable under antipsychotic treatments. To better understand this issue, we measured sleep characteristics between 1999 and 2003 in 150 outpatients diagnosed with Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) schizophrenia or schizoaffective disorder and 80 healthy controls using a sleep habits questionnaire. Comparisons between both groups were performed and multiple comparisons were Bonferroni corrected. Compared to healthy controls, patients with schizophrenia reported significantly increased sleep latency, time in bed, total sleep time and frequency of naps during weekdays and weekends along with normal sleep efficiency, sleep satisfaction, and feeling of restfulness in the morning. In conclusion, sleep-onset insomnia is a major, enduring disorder in middle-aged, non-hospitalized patients with schizophrenia that are otherwise clinically stable under antipsychotic and adjuvant medications. Noteworthy, these patients do not complain of sleep-maintenance insomnia but report increased sleep propensity and normal sleep satisfaction. These results may reflect circadian disturbances in schizophrenia, but objective laboratory investigations are needed to confirm subjective sleep reports. Copyright © 2009 Elsevier Ltd. All rights reserved.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
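The ground-plane scale correction described above reduces to rescaling the up-to-scale translation so that the estimated camera height matches the known mounting height; a minimal sketch (the 1.7 m value is illustrative, not the actual rig height).

```python
import numpy as np

def scale_from_ground_plane(t_est, h_est, h_known=1.7):
    """Correct monocular scale drift: rescale the estimated translation so
    the estimated camera height above the fitted ground plane (h_est, in
    the SFM's arbitrary units) matches the known mounting height in metres."""
    s = h_known / h_est
    return s * np.asarray(t_est)

print(scale_from_ground_plane([0.02, 0.0, 0.95], h_est=1.45))
```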
A quantitative comparison of corrective and perfective maintenance
NASA Technical Reports Server (NTRS)
Henry, Joel; Cain, James
1994-01-01
This paper presents a quantitative comparison of corrective and perfective software maintenance activities. The comparison utilizes basic data collected throughout the maintenance process. The data collected are extensive and allow the impact of both types of maintenance to be quantitatively evaluated and compared. Basic statistical techniques test relationships between and among process and product data. The results show interesting similarities and important differences in both process and product characteristics.
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
McKay, Gareth J; Loane, Edward; Nolan, John M; Patterson, Christopher C; Meyers, Kristin J; Mares, Julie A; Yonova-Doing, Ekaterina; Hammond, Christopher J; Beatty, Stephen; Silvestri, Giuliana
2013-01-01
Objective To investigate association of scavenger receptor class B, member 1 (SCARB1) genetic variants with serum carotenoid levels of lutein (L) and zeaxanthin (Z) and macular pigment optical density (MPOD). Design A cross-sectional study of healthy adults aged 20-70. Participants 302 participants recruited following local advertisement. Methods MPOD was measured by customized heterochromatic flicker photometry. Fasting blood samples were taken for serum L and Z measurement by HPLC and lipoprotein analysis by spectrophotometric assay. Forty-seven single nucleotide polymorphisms (SNPs) across SCARB1 were genotyped using Sequenom technology. Association analyses were performed using PLINK to compare allele and haplotype means, with adjustment for potential confounding and correction for multiple comparisons by permutation testing. Replication analysis was performed in the TwinsUK and CAREDS cohorts. Main outcome measures Odds ratios (ORs) for macular pigment optical density area, serum lutein and zeaxanthin concentrations associated with genetic variations in SCARB1 and interactions between SCARB1 and sex. Results Following multiple regression analysis with adjustment for age, body mass index, sex, high-density lipoprotein cholesterol (HDLc), low-density lipoprotein cholesterol (LDLc), triglycerides, smoking, and dietary L and Z levels, 5 SNPs were significantly associated with serum L concentration and 1 SNP with MPOD (P<0.01). Only the association between rs11057841 and serum L withstood correction for multiple comparisons by permutation testing (P<0.01) and replicated in the TwinsUK cohort (P=0.014). Independent replication was also observed in the CAREDS cohort with rs10846744 (P=2×10⁻⁴), a SNP in high linkage disequilibrium with rs11057841 (r²=0.93). No significant interactions by sex were found. Haplotype analysis revealed no stronger association than obtained with single SNP analyses. Conclusions Our study has identified association between rs11057841 and serum L concentration (24% increase per T allele) in healthy subjects, independent of potential confounding factors. Our data support further evaluation of the role for SCARB1 in the transport of macular pigment and the possible modulation of AMD risk through combating the effects of oxidative stress within the retina. PMID:23562302
The impact of prison reentry services on short-term outcomes: evidence from a multisite evaluation.
Lattimore, Pamela K; Visher, Christy A
2013-01-01
Renewed interest in prisoner rehabilitation to improve postrelease outcomes occurred in the 1990s, as policy makers reacted to burgeoning prison populations with calls to facilitate community reintegration and reduce recidivism. In 2003, the Federal government funded grants to implement locally designed reentry programs. Adult programs in 12 states were studied to determine the effects of the reentry programs on multiple outcomes. A two-stage matching procedure was used to examine the effectiveness of 12 reentry programs for adult males. In the first stage, "intact group matching" was used to identify comparison populations that were similar to program participants. In the second stage, propensity score matching was used to adjust for remaining differences between groups. Propensity score weighted logistic regression was used to examine the impact of reentry program participation on multiple outcomes measured 3 months after release. The study population was 1,697 adult males released from prisons in 2004-2005. Data consisted of interview data gathered 30 days prior to release and approximately 3 months following release, supplemented by administrative data from state departments of correction and the National Crime Information Center. Results suggest programs increased in-prison service receipt and produced modest positive outcomes across multiple domains (employment, housing, and substance use) 3 months after release. Although program participants reported fewer crimes, differences in postrelease arrest and reincarceration were not statistically significant. Incomplete implementation and service receipt by comparison group members may have resulted in insufficient statistical power to identify stronger treatment effects.
Tavazzi, Eleonora; Laganà, Maria Marcella; Bergsland, Niels; Tortorella, Paola; Pinardi, Giovanna; Lunetta, Christian; Corbo, Massimo; Rovaris, Marco
2015-03-01
Primary progressive multiple sclerosis (PPMS) and amyotrophic lateral sclerosis (ALS) seem to share some clinical and pathological features. MRI studies revealed the presence of grey matter (GM) atrophy in both diseases, but no comparative data are available. The objective was to compare the regional patterns of GM tissue loss in PPMS and ALS with voxel-based morphometry (VBM). Eighteen PPMS patients, 20 ALS patients, and 31 healthy controls (HC) were studied with a 1.5 Tesla scanner. VBM was performed to assess volumetric GM differences with age and sex as covariates. Threshold-free cluster enhancement analysis was used to obtain significant clusters. Group comparisons were tested with family-wise error correction for multiple comparisons (p < 0.05), except for the HC versus ALS comparison, which was tested at a level of p < 0.001 uncorrected with a cluster threshold of 20 contiguous voxels. Compared to HC, ALS patients showed GM tissue reduction in selected frontal and temporal areas, while PPMS patients showed a widespread bilateral GM volume decrease involving both deep and cortical regions. Compared to ALS, PPMS patients showed tissue volume reductions in both deep and cortical GM areas. This preliminary study confirms that PPMS is characterized by more diffuse cortical and subcortical GM atrophy than ALS and that, in the latter condition, brain damage is present outside the motor system. These results suggest that PPMS and ALS may share pathological features leading to GM tissue loss.
Intracalibration of particle detectors on a three-axis stabilized geostationary platform
NASA Astrophysics Data System (ADS)
Rowland, W.; Weigel, R. S.
2012-11-01
We describe an algorithm for intracalibration of measurements from plasma or energetic particle detectors on a three-axis stabilized platform. Modeling and forecasting of Earth's radiation belt environment requires data from particle instruments, and these data depend on measurements which have an inherent calibration uncertainty. Pre-launch calibration is typically performed, but on-orbit changes in the instrument often necessitate adjustment of calibration parameters to mitigate the effect of these changes on the measurements. On-orbit calibration practices for particle detectors aboard spin-stabilized spacecraft are well established. Three-axis stabilized platforms, however, pose unique challenges even when comparisons are being performed between multiple telescopes measuring the same energy ranges aboard the same satellite. This algorithm identifies time intervals when different telescopes are measuring particles with the same pitch angles. These measurements are used to compute scale factors which can be multiplied by the pre-launch geometric factor to correct any changes. The approach is first tested using measurements from GOES-13 MAGED particle detectors over a 5-month time period in 2010. We find statistically significant variations which are generally on the order of 5% or less. These results do not appear to be dependent on Poisson statistics nor upon whether a dead time correction was performed. When applied to data from a 5-month interval in 2011, one telescope shows a 10% shift from the 2010 scale factors. This technique has potential for operational use to help maintain relative calibration between multiple telescopes aboard a single satellite. It should also be extensible to inter-calibration between multiple satellites.
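The core of the intracalibration step described above is a ratio of fluxes measured by two telescopes when they sample the same pitch angle; a minimal sketch of that idea (not the GOES-13 MAGED pipeline, and with made-up variable names).

```python
import numpy as np

def intracalibration_scale(flux_ref, flux_other):
    """Scale factor for one telescope relative to a reference, computed
    from intervals when both telescopes observe the same pitch angle.
    Multiplying the pre-launch geometric factor by this ratio corrects
    on-orbit changes in the non-reference telescope."""
    flux_ref = np.asarray(flux_ref, dtype=float)
    flux_other = np.asarray(flux_other, dtype=float)
    good = (flux_ref > 0) & (flux_other > 0)
    return np.median(flux_ref[good] / flux_other[good])

print(intracalibration_scale([120, 98, 110, 0], [115, 101, 104, 5]))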
Naaijen, J; Bralten, J; Poelmans, G; Faraone, Stephen; Asherson, Philip; Banaschewski, Tobias; Buitelaar, Jan; Franke, Barbara; P Ebstein, Richard; Gill, Michael; Miranda, Ana; D Oades, Robert; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Sonuga-Barke, Edmund; Anney, Richard; Mulas, Fernando; Steinhausen, Hans-Christoph; Glennon, J C; Franke, B; Buitelaar, J K
2017-01-01
Attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorders (ASD) often co-occur. Both are highly heritable; however, it has been difficult to discover genetic risk variants. Glutamate and GABA are the main excitatory and inhibitory neurotransmitters in the brain; their balance is essential for proper brain development and functioning. In this study we investigated the role of glutamate and GABA genetics in ADHD severity, autism symptom severity and inhibitory performance, based on gene set analysis, an approach to investigate multiple genetic variants simultaneously. Common variants within glutamatergic and GABAergic genes were investigated using the MAGMA software in an ADHD case-only sample (n=931), in which we assessed ASD symptoms and response inhibition on a Stop task. Gene set analyses for ADHD symptom severity, divided into inattention and hyperactivity/impulsivity symptoms, autism symptom severity and inhibition were performed using principal component regression analyses. Subsequently, gene-wide association analyses were performed. The glutamate gene set showed an association with severity of hyperactivity/impulsivity (P=0.009), which was robust to correction at genome-wide association levels. The GABA gene set showed a nominally significant association with inhibition (P=0.04), but this did not survive correction for multiple comparisons. None of the single-gene or single-variant associations was significant on its own. By analyzing multiple genetic variants within candidate gene sets together, we were able to find genetic associations supporting the involvement of excitatory and inhibitory neurotransmitter systems in ADHD and ASD symptom severity in ADHD. PMID:28072412
Van Schuerbeek, Peter; Baeken, Chris; De Mey, Johan
2016-01-01
Concerns are being raised about the large variability in reported correlations between gray matter morphology and affective personality traits such as 'Harm Avoidance' (HA). A recent review study (Mincic 2015) stipulated that this variability could stem from methodological differences between studies. In order to achieve more robust results by standardizing the data processing procedure, as a first step, we repeatedly analyzed data from healthy females while changing the processing settings (voxel-based morphometry (VBM) or region-of-interest (ROI) labeling, smoothing filter width, nuisance parameters included in the regression model, brain atlas, and multiple comparisons correction method). The heterogeneity in the obtained results clearly illustrates the dependence of the study outcome on the chosen analysis settings. Based on our results and the existing literature, we recommend the use of VBM over ROI labeling for whole-brain analyses, with a small or intermediate smoothing filter (5-8 mm) and a model variable selection step included in the processing procedure. Additionally, it is recommended that ROI labeling should only be used in combination with a clear hypothesis and that authors are encouraged to report their results uncorrected for multiple comparisons as supplementary material to aid review studies. PMID:27096608
Creative females have larger white matter structures: Evidence from a large sample study.
Takeuchi, Hikaru; Taki, Yasuyuki; Nouchi, Rui; Yokoyama, Ryoichi; Kotozaki, Yuka; Nakagawa, Seishu; Sekiguchi, Atsushi; Iizuka, Kunio; Yamamoto, Yuki; Hanawa, Sugiko; Araki, Tsuyoshi; Makoto Miyauchi, Carlos; Shinada, Takamitsu; Sakaki, Kohei; Sassa, Yuko; Nozawa, Takayuki; Ikeda, Shigeyuki; Yokota, Susumu; Daniele, Magistro; Kawashima, Ryuta
2017-01-01
The importance of brain connectivity for creativity has been theoretically suggested and empirically demonstrated. Studies have shown sex differences in creativity measured by divergent thinking (CMDT) as well as sex differences in the structural correlates of CMDT. However, the relationships between regional white matter volume (rWMV) and CMDT and associated sex differences have never been directly investigated. In addition, structural studies have shown poor replicability and inaccuracy of multiple comparisons over the whole brain. To address these issues, we used the data from a large sample of healthy young adults (776 males and 560 females; mean age: 20.8 years, SD = 0.8). We investigated the relationship between CMDT and WMV using the newest version of voxel-based morphometry (VBM). We corrected for multiple comparisons over whole brain using the permutation-based method, which is known to be quite accurate and robust. Significant positive correlations between rWMV and CMDT scores were observed in widespread areas below the neocortex specifically in females. These associations with CMDT were not observed in analyses of fractional anisotropy using diffusion tensor imaging. Using rigorous methods, our findings further supported the importance of brain connectivity for creativity as well as its female-specific association. Hum Brain Mapp 38:414-430, 2017. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip.
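A compact sketch of the combinatorial idea: under the null, the two ranks of a fixed pair of treatments within one block are a uniform ordered pair of distinct values in 1..k, and the rank sum difference across blocks is the convolution of the per-block difference distributions. This is not the authors' R implementation, and the example numbers are arbitrary.

```python
import numpy as np

def exact_pairwise_pvalue(d_obs, n_blocks, k_treatments):
    """Exact two-sided p-value for the rank sum difference of one pair of
    treatments in a Friedman layout (n blocks, k treatments per block)."""
    k = k_treatments
    offset = k - 1                           # differences span -(k-1)..(k-1)
    pmf = np.zeros(2 * k - 1)
    for a in range(1, k + 1):
        for b in range(1, k + 1):
            if a != b:
                pmf[a - b + offset] += 1.0 / (k * (k - 1))
    total = pmf.copy()
    for _ in range(n_blocks - 1):            # convolve across blocks
        total = np.convolve(total, pmf)
    diffs = np.arange(-n_blocks * (k - 1), n_blocks * (k - 1) + 1)
    return total[np.abs(diffs) >= abs(d_obs)].sum()

# e.g. 4 classifiers compared over 10 datasets, observed difference 12
print(exact_pairwise_pvalue(d_obs=12, n_blocks=10, k_treatments=4))
```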
Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.
Tam, W G; Zardecki, A
1982-07-01
Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
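The structure of the correction can be sketched as a multiplicative factor applied to the single-scattering Beer-Lambert transmission; the factor itself would come from the small-angle radiative-transfer solution discussed in the paper and is left here as a user-supplied placeholder.

```python
import numpy as np

def received_power(p0, tau, ms_correction=None):
    """Beer-Lambert attenuation of the received radiant power, with an
    optional multiplicative multiple-scattering correction factor supplied
    as a callable of optical depth (illustrative placeholder)."""
    single = p0 * np.exp(-tau)
    if ms_correction is None:
        return single
    return single * ms_correction(tau)

# With no correction, the law underestimates what an open (wide field of
# view) detector receives in fog or cloud once the optical depth is large.
print(received_power(1.0, np.array([0.5, 2.0, 5.0])))
```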
Novel methods for parameter-based analysis of myocardial tissue in MR images
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.
2007-03-01
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
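The semi-quantitative curve parameters mentioned above can be illustrated with a short sketch; the parameter set (peak enhancement, time to peak, maximum upslope, area under the curve) follows common perfusion usage and is not necessarily the authors' exact set.

```python
import numpy as np

def enhancement_parameters(t, curve, baseline_pts=3):
    """Semi-quantitative parameters of one voxel's enhancement curve,
    computed after subtracting the pre-contrast baseline."""
    base = curve[:baseline_pts].mean()
    enh = curve - base
    peak = enh.max()
    time_to_peak = t[enh.argmax()] - t[0]
    upslope = np.max(np.diff(enh) / np.diff(t))
    auc = np.trapz(enh, t)
    return {"peak": peak, "time_to_peak": time_to_peak,
            "upslope": upslope, "auc": auc}
```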
Shih, Ching-Hsiang; Shih, Ching-Tien; Chu, Chiung-Ling
2010-01-01
Recent research has adopted software technology that turns the Nintendo Wii Balance Board into a high-performance change of standing posture (CSP) detector, and has assessed whether two persons with multiple disabilities would be able to control environmental stimulation using body swing (changing standing posture). This study extends the Wii Balance Board functionality to standing posture correction (i.e., actively adjusting abnormal standing posture) and assessed whether two persons with multiple disabilities would be able to actively correct their standing posture by controlling their favorite stimulation on/off using a Wii Balance Board with a newly developed standing posture correcting program (SPCP). The study was performed according to an ABAB design, in which A represented baseline and B represented intervention phases. Data showed that both participants significantly increased the time duration of maintaining correct standing posture (TDMCSP) to activate the control system to produce environmental stimulation during the intervention phases. Practical and developmental implications of the findings were discussed.
Gettig, Jacob P
2006-04-01
To determine the prevalence of established multiple-choice test-taking correct and incorrect answer cues in the American College of Clinical Pharmacy's Updates in Therapeutics: The Pharmacotherapy Preparatory Course, 2005 Edition, as an equal or lesser surrogate indication of the prevalence of such cues in the Pharmacotherapy board certification examination. All self-assessment and patient case question-and-answer sets were assessed individually to determine if they were subject to selected correct and incorrect answer cues commonly seen in multiple-choice question writing. If the question was considered evaluable, correct answer cues (longest answer, mid-range number, one of two similar choices, and one of two opposite choices) were tallied. In addition, incorrect answer cues (inclusionary language and grammatical mismatch) were also tallied. Each cue was counted if it did what was expected or did the opposite of what was expected. Multiple cues could be identified in each question. A total of 237 (47.7%) of 497 questions in the manual were deemed evaluable. A total of 325 correct answer cues and 35 incorrect answer cues were identified in the 237 evaluable questions. Most evaluable questions contained one to two correct and/or incorrect answer cue(s). Longest answer was the most frequently identified correct answer cue; however, it was the least likely to identify the correct answer. Inclusionary language was the most frequently identified incorrect answer cue. Incorrect answer cues were considerably more likely to identify incorrect answer choices than correct answer cues were able to identify correct answer choices. The use of established multiple-choice test-taking cues is unlikely to be of significant help when taking the Pharmacotherapy board certification examination, primarily because of the lack of questions subject to such cues and the inability of correct answer cues to accurately identify correct answers. Incorrect answer cues, especially the use of inclusionary language, almost always will accurately identify an incorrect answer choice. Assuming that questions in the preparatory course manual were equal or lesser surrogates of those in the board certification examination, it is unlikely that intuition alone can replace adequate preparation and studying as the sole determinant of examination success.
Benchmarking and performance analysis of the CM-2. [SIMD computer
NASA Technical Reports Server (NTRS)
Myers, David W.; Adams, George B., II
1988-01-01
A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.
Wilkoff, B L; Kühlkamp, V; Volosin, K; Ellenbogen, K; Waldecker, B; Kacet, S; Gillberg, J M; DeSouza, C M
2001-01-23
One of the perceived benefits of dual-chamber implantable cardioverter-defibrillators (ICDs) is the reduction in inappropriate therapy due to new detection algorithms. It was the purpose of the present investigation to propose methods to minimize bias during such comparisons and to report the arrhythmia detection clinical results of the PR Logic dual-chamber detection algorithm in the GEM DR ICD in the context of these methods. Between November 1997 and October 1998, 933 patients received the GEM DR ICD in this prospective multicenter study. A total of 4856 sustained arrhythmia episodes (n=311) with stored electrogram and marker channel were classified by the investigators; 3488 episodes (n=232) were ventricular tachycardia (VT)/ventricular fibrillation (VF), and 1368 episodes (n=149) were supraventricular tachycardia (SVT). The overall detection results were corrected for multiple episodes within a patient with the generalized estimating equations (GEE) method with an exchangeable correlation structure between episodes. The relative sensitivity for detection of sustained VT and/or VF was 100.0% (3488 of 3488, n=232; 95% CI 98.3% to 100%), the VT/VF positive predictivity was 88.4% uncorrected (3488 of 3945, n=278) and 78.1% corrected (95% CI 73.3% to 82.3%) with the GEE method, and the SVT positive predictivity was 100.0% (911 of 911, n=101; 95% CI 96% to 100%). A structured approach to analysis limits the bias inherent in the evaluation of tachycardia discrimination algorithms through the use of relative VT/VF sensitivity, VT/VF positive predictivity, and SVT positive predictivity along with corrections for multiple tachycardia episodes in a single patient.
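A sketch of the episode-level GEE correction described above, using statsmodels with an exchangeable working correlation between episodes of the same patient; the data frame is simulated, not the trial's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical episode-level data: one row per detected episode, with the
# patient ID as the cluster variable and a binary indicator of whether the
# device classification was appropriate.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "appropriate": rng.binomial(1, 0.85, size=200),
    "patient": np.repeat(np.arange(40), 5),
})

# Intercept-only logistic GEE: the fitted proportion is corrected for the
# clustering of multiple episodes within a patient.
model = sm.GEE.from_formula(
    "appropriate ~ 1",
    groups="patient",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```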
Artificial intelligence in mitral valve analysis.
Jeganathan, Jelliffe; Knio, Ziyad; Amador, Yannis; Hai, Ting; Khamooshian, Arash; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2017-01-01
Echocardiographic analysis of mitral valve (MV) has become essential for diagnosis and management of patients with MV disease. Currently, the various software used for MV analysis require manual input and are prone to interobserver variability in the measurements. The aim of this study is to determine the interobserver variability in an automated software that uses artificial intelligence for MV analysis. Retrospective analysis of intraoperative three-dimensional transesophageal echocardiography data acquired from four patients with normal MV undergoing coronary artery bypass graft surgery in a tertiary hospital. Echocardiographic data were analyzed using the eSie Valve Software (Siemens Healthcare, Mountain View, CA, USA). Three examiners analyzed three end-systolic (ES) frames from each of the four patients. A total of 36 ES frames were analyzed and included in the study. A multiple mixed-effects ANOVA model was constructed to determine if the examiner, the patient, and the loop had a significant effect on the average value of each parameter. A Bonferroni correction was used to correct for multiple comparisons, and P = 0.0083 was considered to be significant. Examiners did not have an effect on any of the six parameters tested. Patient and loop had an effect on the average parameter value for each of the six parameters as expected (P < 0.0083 for both). We were able to conclude that using automated analysis, it is possible to obtain results with good reproducibility, which only requires minimal user intervention.
Hammerle, Albin; Meier, Fred; Heinl, Michael; Egger, Angelika; Leitinger, Georg
2017-04-01
Thermal infrared (TIR) cameras perfectly bridge the gap between (i) on-site measurements of land surface temperature (LST) providing high temporal resolution at the cost of low spatial coverage and (ii) remotely sensed data from satellites that provide high spatial coverage at relatively low spatio-temporal resolution. While LST data from satellite (LST_sat) and airborne platforms are routinely corrected for atmospheric effects, such corrections are barely applied for LST from ground-based TIR imagery (using TIR cameras; LST_cam). We show the consequences of neglecting atmospheric effects on LST_cam of different vegetated surfaces at landscape scale. We compare LST measured from different platforms, focusing on the comparison of LST data from on-site radiometry (LST_osr) and LST_cam using a commercially available TIR camera in the region of Bozen/Bolzano (Italy). Given a digital elevation model and measured vertical air temperature profiles, we developed a multiple linear regression model to correct LST_cam data for atmospheric influences. We could show the distinct effect of atmospheric conditions and related radiative processes along the measurement path on LST_cam, proving the necessity to correct LST_cam data on landscape scale, despite their relatively low measurement distances compared to remotely sensed data. Corrected LST_cam data revealed the dampening effect of the atmosphere, especially at high temperature differences between the atmosphere and the vegetated surface. Not correcting for these effects leads to erroneous LST estimates, in particular to an underestimation of the heterogeneity in LST, both in time and space. In the most pronounced case, we found a temperature range extension of almost 10 K.
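A minimal sketch of the regression-based correction idea (the predictor names and the simulated dampening model are assumptions, not the authors' fitted model): regress the camera-to-reference temperature difference on path length and path-mean air temperature, then apply the fitted model as a correction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
path_len = rng.uniform(200, 3000, n)     # m, camera-to-target distance (from a DEM)
t_air = rng.uniform(5, 25, n)            # degC, path-mean air temperature
lst_ref = rng.uniform(10, 45, n)         # degC, reference (on-site) LST
# Simulated atmospheric dampening: camera reading pulled toward air temperature
lst_cam = lst_ref + 2e-4 * path_len * (t_air - lst_ref) + rng.normal(0, 0.3, n)

# Multiple linear regression of the camera error on atmospheric predictors
X = np.column_stack([np.ones(n), path_len, t_air, lst_cam])
coef, *_ = np.linalg.lstsq(X, lst_ref - lst_cam, rcond=None)
lst_cam_corr = lst_cam + X @ coef

print("RMSE before:", np.sqrt(np.mean((lst_cam - lst_ref) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((lst_cam_corr - lst_ref) ** 2)))
```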
Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.
Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy
2015-12-30
While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphic processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real-time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT). The whole-brain data analysis with more than 100,000 voxels and more than 250 volumes is completed in less than 300 ms, much faster than the time required to acquire the fMRI volume. Real-time processing implementation cannot be identical to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM. We verified that reduced slice-timing correction for real-time analysis had comparable output with off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time and it is capable of processing these steps on all available data at a given time, without need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, while the number of samples should be considered in real-time GLM. Copyright © 2015 Elsevier B.V. All rights reserved.
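The voxel-wise GLM with nuisance regressors at the core of such a pipeline reduces to one large least-squares solve; the sketch below is a generic illustration with synthetic data (the regressor counts and block design are assumptions), not the authors' GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vols, n_vox = 250, 1000

# Design matrix: intercept, a simple block-design task regressor, and nuisance
# regressors standing in for motion and physiological (RETROICOR/RVT) signals.
task = (np.arange(n_vols) // 10 % 2).astype(float)
nuisance = rng.normal(size=(n_vols, 8))
X = np.column_stack([np.ones(n_vols), task, nuisance])

# Synthetic data: a weak task effect plus noise projected through the design
true_beta = np.vstack([rng.normal(size=(1, n_vox)),
                       0.5 * np.ones((1, n_vox)),
                       rng.normal(size=(8, n_vox))])
Y = X @ true_beta + rng.normal(size=(n_vols, n_vox))

# All voxels fit in a single least-squares call; row 1 holds the task betas
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("mean task beta:", beta[1].mean())
```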
2008-03-01
multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space
Zhang, Lanqiang; Guo, Youming; Rao, Changhui
2017-02-20
Multi-conjugate adaptive optics (MCAO) is the most promising technique currently developed to enlarge the corrected field of view of adaptive optics for astronomy. In this paper, we propose a new configuration of solar MCAO based on high order ground layer adaptive optics and low order high altitude correction, which results in a homogeneous correction effect across the whole field of view. An individual high order multiple direction Shack-Hartmann wavefront sensor is employed in the configuration to detect the ground layer turbulence for low altitude correction. Furthermore, the other low order multiple direction Shack-Hartmann wavefront sensor supplies the wavefront information caused by the higher layers' turbulence through atmospheric tomography for high altitude correction. Simulation results based on the system design at the 1-meter New Vacuum Solar Telescope show that the correction uniformity of the new scheme is clearly improved compared to the conventional solar MCAO configuration.
GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections
NASA Astrophysics Data System (ADS)
Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian
2017-09-01
The inter-comparison of the reflective solar bands between the instruments onboard a geostationary orbit satellite and onboard a low Earth orbit satellite is very helpful to assess their calibration consistency. GOES-R was launched on November 19, 2016 and Himawari 8 was launched October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 on November 29, 2016, when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparison with an appropriate reflectance level, accurate adjustment for band spectral coverage difference, reduction of impact from pixel mismatching, and consistency of BRDF and atmospheric correction. The site in this work is a desert site in Australia (latitude 29.0° S; longitude 139.8° E). Due to the difference in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top of atmosphere reflectance. The scattering, especially Rayleigh scattering, should be removed, allowing the ground reflectance to be derived. Secondly, the angle differences magnify the BRDF effect. The ground reflectance should be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum modeling and BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47 μm) shows good matching with VIIRS band M3 with a difference of 0.15%. AHI band 5 (1.69 μm) shows the largest difference in comparison with VIIRS M10.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Camara, G.; Dias, L. A. V.; Mascarenhas, N. D. D.; Desouza, R. C. M.; Pereira, A. E. C.
1982-01-01
Earth's atmosphere reduces a sensor's ability to discriminate targets. Using radiometric correction to reduce the atmospheric effects may considerably improve the performance of an automatic image interpreter. Several methods for radiometric correction from the open literature are compared, leading to the development of an atmospheric correction system.
Albi, Angela; Meola, Antonio; Zhang, Fan; Kahali, Pegah; Rigolo, Laura; Tax, Chantal M W; Ciris, Pelin Aksit; Essayed, Walid I; Unadkat, Prashin; Norton, Isaiah; Rathi, Yogesh; Olubiyi, Olutayo; Golby, Alexandra J; O'Donnell, Lauren J
2018-03-01
Diffusion magnetic resonance imaging (dMRI) provides preoperative maps of neurosurgical patients' white matter tracts, but these maps suffer from echo-planar imaging (EPI) distortions caused by magnetic field inhomogeneities. In clinical neurosurgical planning, these distortions are generally not corrected and thus contribute to the uncertainty of fiber tracking. Multiple image processing pipelines have been proposed for image-registration-based EPI distortion correction in healthy subjects. In this article, we perform the first comparison of such pipelines in neurosurgical patient data. Five pipelines were tested in a retrospective clinical dMRI dataset of 9 patients with brain tumors. Pipelines differed in the choice of fixed and moving images and the similarity metric for image registration. Distortions were measured in two important tracts for neurosurgery, the arcuate fasciculus and corticospinal tracts. Significant differences in distortion estimates were found across processing pipelines. The most successful pipeline used dMRI baseline and T2-weighted images as inputs for distortion correction. This pipeline gave the most consistent distortion estimates across image resolutions and brain hemispheres. Quantitative results of mean tract distortions on the order of 1-2 mm are in line with other recent studies, supporting the potential need for distortion correction in neurosurgical planning. Novel results include significantly higher distortion estimates in the tumor hemisphere and greater effect of image resolution choice on results in the tumor hemisphere. Overall, this study demonstrates possible pitfalls and indicates that care should be taken when implementing EPI distortion correction in clinical settings. Copyright © 2018 by the American Society of Neuroimaging.
Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants
NASA Astrophysics Data System (ADS)
Bzdak, Adam; Holzmann, Romain; Koch, Volker
2016-12-01
In this article, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We discuss the limitations of the methods presented in these papers. Specifically, we consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we discuss the simplest and most straightforward methods to implement those corrections.
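For context, the baseline correction that this work extends assumes a constant binomial efficiency, under which measured factorial moments are simply scaled by powers of the efficiency. The sketch below is my own illustration of that baseline case (not the paper's generalized method) for the first two cumulants.

```python
import numpy as np

def corrected_cumulants(n_measured, eff):
    """Efficiency-correct C1 and C2 assuming a constant binomial response.

    With binomial efficiency eff, measured factorial moments obey
    f_k = eff**k * F_k, so dividing by eff**k recovers the true moments
    before converting them to cumulants.
    """
    n = np.asarray(n_measured, dtype=float)
    F1 = n.mean() / eff
    F2 = (n * (n - 1)).mean() / eff**2
    return F1, F2 + F1 - F1**2          # C1, C2

# Check: thin a Poisson sample (true C1 = C2 = 10) with 60% efficiency
rng = np.random.default_rng(3)
true_n = rng.poisson(10.0, 200_000)
meas_n = rng.binomial(true_n, 0.6)
print(corrected_cumulants(meas_n, 0.6))   # both close to 10
```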
Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants
Bzdak, Adam; Holzmann, Romain; Koch, Volker
2016-12-19
Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We then discuss the limitations of the methods presented in these papers. Specifically, we consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we discuss the simplest and most straightforward methods to implement those corrections.
Comparison of RCRA SWMU Corrective Action and CERCLA Remedial Action
Rupe, Sam Capps
1991-09-30
Piloting a Polychotomous Partial-Credit Scoring Procedure in a Multiple-Choice Test
ERIC Educational Resources Information Center
Tsopanoglou, Antonios; Ypsilandis, George S.; Mouti, Anna
2014-01-01
Multiple-choice (MC) tests are frequently used to measure language competence because they are quick, economical and straightforward to score. While degrees of correctness have been investigated for partially correct responses in combined-response MC tests, degrees of incorrectness in distractors and the role they play in determining the…
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason E.
2003-11-01
A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is shown, theoretically and experimentally, to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of the statistical-acoustics and geometrical-acoustics models and predictions for coupling apertures all agree with measurements.
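The statistical-acoustics picture of two coupled rooms reduces to a pair of linear energy-balance equations whose solution shows the characteristic double-slope decay; the rates below are purely illustrative placeholders, not values from the scale model or Bass Hall.

```python
import numpy as np
from scipy.integrate import solve_ivp

delta1, delta2 = 30.0, 8.0      # 1/s, absorption decay rates of the two rooms
kappa12, kappa21 = 4.0, 2.0     # 1/s, energy exchange rates through the aperture

def rhs(t, E):
    E1, E2 = E
    return [-(delta1 + kappa12) * E1 + kappa21 * E2,
            -(delta2 + kappa21) * E2 + kappa12 * E1]

# Impulse in room 1; the early decay follows room 1, the late tail follows room 2
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 2.0, 200)
level_db = 10 * np.log10(sol.sol(t)[0] + 1e-12)
print(level_db[::50])           # double-slope decay visible in the printed levels
```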
Multiple testing corrections in quantitative proteomics: A useful but blunt tool.
Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A
2016-09-01
Multiple testing corrections are a useful tool for restricting the FDR, but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, in proteomics experiments low power can be common, driven by proteomics-specific issues like small effects due to ratio compression, and few replicates due to high reagent cost, limited instrument time availability and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low power, medium scale situation, other methods such as effect size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to the standard multiple testing correction methods, which should be employed as a useful tool but not be regarded as a required rubber stamp. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
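A toy version of the low-power simulations described above (the protein counts, replicate number, and effect size are arbitrary choices): with three replicates and a modest effect, both Bonferroni and Benjamini-Hochberg thresholds recover almost none of the true positives.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
n_prot, n_true, n_rep, effect = 2000, 100, 3, 1.0

# Simulated log-ratios: most proteins null, a subset with a modest true shift
data = rng.normal(0.0, 1.0, (n_prot, n_rep))
data[:n_true] += effect

_, p = stats.ttest_1samp(data, 0.0, axis=1)
for method in ("bonferroni", "fdr_bh"):
    reject, *_ = multipletests(p, alpha=0.05, method=method)
    print(method, "true positives detected:", int(reject[:n_true].sum()))
```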
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weldon, M; DiCostanzo, D; Grzetic, S
2015-06-15
Purpose: To show that a single model for Portal Dosimetry (PD) can be established for beam-matched TrueBeam™ linacs that are equipped with the DMI imager (43×43 cm effective area). Methods: Our department acquired 6 new TrueBeam™s, 4 “Slim” and 2 “Edge” models. The Slims were equipped with 6 and 10MV photons, and the Edges with 6MV. MLCs differed between the Slims and Edges (Millennium 120 vs HD-MLC, respectively). The PD model was created from data acquired using a single linac (Slim). This includes the maximum field size profile, as well as output factors and measured fluence acquired using the DMI imager. All identical linacs were beam-matched; profiles were within 1% at maximum field size at a variety of depths. The profile correction file was generated from a 40×40 profile acquired at 5 cm depth, 95 cm SSD, and was adjusted for deviation at the field edges and corners. The PD model and profile correction were applied to all six TrueBeam™s and imagers. A variety of jaw-only and sliding window (SW) MLC test fields, as well as TG-119 and clinical SW and VMAT plans, were run on each linac to validate the model. Results: For 6X and 10X, field-by-field comparison using 3mm/3% absolute gamma criteria passed 90% or better for all cases. This was also true for composite comparisons of TG-119 and clinical plans, matching our current department criteria. Conclusion: Using a single model per photon energy for PD for the TrueBeam™ equipped with a DMI imager can produce clinically acceptable results across multiple identical and matched linacs. It is also possible to use the same PD model despite different MLCs. This can save time during commissioning and software updates.
Huang, Yong; Feng, Ganjun; Song, Yueming; Liu, Limin; Zhou, Chunguang; Wang, Lei; Zhou, Zhongjie; Yang, Xi
2017-09-01
One-stage posterior hemivertebral resection has been proven to be an effective, reliable surgical option for treating congenital scoliosis due to a single hemivertebra. To date, however, no studies of treating unbalanced multiple hemivertebrae have appeared. This study evaluated the efficacy and safety of one-stage posterior hemivertebral resection for unbalanced multiple hemivertebrae. Altogether, we studied 15 patients with unbalanced multiple hemivertebrae who had undergone hemivertebral resection using the one-stage posterior approach with at least 2 years of follow-up. Clinical outcomes were assessed radiographically and with the Scoliosis Research Society-22 (SRS-22) score. Related complications were also recorded. The mean Cobb angle of the main curve was 62.4° (46°-98°) before surgery and 18.2° (9°-33°) at the most recent follow-up (average correction 73.3%). The compensatory cranial curve was corrected from 28.5° (11°-52°) to 9.1° (0°-30°) (average correction 70.0%). The compensatory caudal curve was corrected from 31.6° (14°-54°) to 6.9° (0°-19°) (average correction 79.1%). The segmental kyphosis/lordosis was corrected from 41.1° (-40° to 98°) to 12.3° (-25° to 41°) (average correction 65.5%). The mean growth rate of the T1-S1 length in immature patients was 9.8 mm/year during the follow-up period. Health-related quality of life (SRS-22 score) had significantly improved. Complications include one wound infection and one developing deformity. One-stage posterior hemivertebral resection for unbalanced multiple hemivertebrae provides good radiographic and clinical outcomes with no severe complications when performed by an experienced surgeon. Longer follow-up to detect late complications is obligatory. Copyright © 2017 Elsevier B.V. All rights reserved.
Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies
Zhang, Yu; Liu, Jun S.
2011-01-01
Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scope. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
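The permutation benchmark that such approximations aim to replace works by recomputing the maximum association statistic under shuffled phenotypes; a stripped-down sketch with synthetic, LD-like correlated genotypes (all sizes and the correlation construction are arbitrary) is shown below.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 200, 500                              # subjects, correlated markers
base = rng.normal(size=(n, 50))
geno = base[:, rng.integers(0, 50, m)] + 0.3 * rng.normal(size=(n, m))  # LD-like
pheno = rng.normal(size=n)

def max_abs_corr(y, X):
    yc = (y - y.mean()) / y.std()
    Xc = (X - X.mean(0)) / X.std(0)
    return np.abs(yc @ Xc / len(y)).max()

obs = max_abs_corr(pheno, geno)
null = np.array([max_abs_corr(rng.permutation(pheno), geno) for _ in range(1000)])
print("family-wise adjusted p for the top marker:", (null >= obs).mean())
```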
Tvete, Ingunn Fride; Natvig, Bent; Gåsemyr, Jørund; Meland, Nils; Røine, Marianne; Klemp, Marianne
2015-01-01
Rheumatoid arthritis patients have been treated with disease modifying anti-rheumatic drugs (DMARDs) and the newer biologic drugs. We sought to compare and rank the biologics with respect to efficacy. We performed a literature search identifying 54 publications encompassing 9 biologics. We conducted a multiple treatment comparison regression analysis letting the number experiencing a 50% improvement on the ACR score be dependent upon dose level and disease duration for assessing the comparable relative effect between biologics and placebo or DMARD. The analysis embraced all treatment and comparator arms over all publications. Hence, all measured effects of any biologic agent contributed to the comparison of all biologic agents relative to each other either given alone or combined with DMARD. We found the drug effect to be dependent on dose level, but not on disease duration, and the impact of a high versus low dose level was the same for all drugs (higher doses indicated a higher frequency of ACR50 scores). The ranking of the drugs when given without DMARD was certolizumab (ranked highest), etanercept, tocilizumab/ abatacept and adalimumab. The ranking of the drugs when given with DMARD was certolizumab (ranked highest), tocilizumab, anakinra/rituximab, golimumab/ infliximab/ abatacept, adalimumab/ etanercept [corrected]. Still, all drugs were effective. All biologic agents were effective compared to placebo, with certolizumab the most effective and adalimumab (without DMARD treatment) and adalimumab/ etanercept (combined with DMARD treatment) the least effective. The drugs were in general more effective, except for etanercept, when given together with DMARDs.
Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M
2018-01-01
Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature-stringent statistical standards, use of large samples, and replication of findings-are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
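One of the remedies mentioned above, correcting for correlated phenotypes via spectral decomposition, is often implemented as an "effective number of tests"; the sketch below uses a Nyholt-style estimate on synthetic correlated phenotypes (the data and the particular formula choice are illustrative, not taken from the reviewed studies).

```python
import numpy as np

def effective_number_of_tests(data):
    """Nyholt-style M_eff from the eigenvalues of the correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    m = corr.shape[0]
    return 1 + (m - 1) * (1 - eig.var(ddof=1) / m)

rng = np.random.default_rng(6)
shared = rng.normal(size=(300, 1))
phenos = 0.7 * shared + 0.3 * rng.normal(size=(300, 6))   # six correlated phenotypes

m_eff = effective_number_of_tests(phenos)
print("M_eff:", round(m_eff, 2), " corrected alpha:", round(0.05 / m_eff, 4))
```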
Cost analysis for computer supported multiple-choice paper examinations
Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank
2011-01-01
Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For the past three semesters, a computer program for input and formatting of MC-questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were conducted with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. Results: The cost of formatting and of the subsequent analysis, including adjustments to the analysis, of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in the winter semester 2009/2010, about 2 hours in SS 2010 and about 1.5 hours in the winter semester 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours for the WS 10/11. Discussion: For conventional multiple-choice exams the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams, and compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam. PMID:22205913
Cost analysis for computer supported multiple-choice paper examinations.
Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank
2011-01-01
Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. For the past three semesters, a computer program for input and formatting of MC-questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were conducted with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. The cost of formatting and of the subsequent analysis, including adjustments to the analysis, of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in the winter semester 2009/2010, about 2 hours in SS 2010 and about 1.5 hours in the winter semester 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours for the WS 10/11. For conventional multiple-choice exams the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams, and compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam.
EMC Global Climate And Weather Modeling Branch Personnel
Comparison Statistics, which include: NCEP Raw and Bias-Corrected Ensemble Domain Averaged Bias; NCEP Raw and Bias-Corrected Ensemble Domain Averaged Bias Reduction (Percents); CMC Raw and Bias-Corrected Control Forecast Domain Averaged Bias; and CMC Raw and Bias-Corrected Control Forecast Domain Averaged Bias Reduction.
2017-02-01
Reports an error in "An integrative formal model of motivation and decision making: The MGPM*" by Timothy Ballard, Gillian Yeo, Shayne Loft, Jeffrey B. Vancouver and Andrew Neal ( Journal of Applied Psychology , 2016[Sep], Vol 101[9], 1240-1265). Equation A3 contained an error. This correct equation is provided in the erratum. (The following abstract of the original article appeared in record 2016-28692-001.) We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Controlling for anthropogenically induced atmospheric variation in stable carbon isotope studies
Long, E.S.; Sweitzer, R.A.; Diefenbach, D.R.; Ben-David, M.
2005-01-01
Increased use of stable isotope analysis to examine food-web dynamics, migration, transfer of nutrients, and behavior will likely result in expansion of stable isotope studies investigating human-induced global changes. Recent elevation of atmospheric CO2 concentration, related primarily to fossil fuel combustion, has reduced atmospheric CO2 δ13C (13C/12C), and this change in isotopic baseline has, in turn, reduced plant and animal tissue δ13C of terrestrial and aquatic organisms. Such depletion in CO2 δ13C and its effects on tissue δ13C may introduce bias into δ13C investigations, and if this variation is not controlled, may confound interpretation of results obtained from tissue samples collected over a temporal span. To control for this source of variation, we used a high-precision record of atmospheric CO2 δ13C from ice cores and direct atmospheric measurements to model modern change in CO2 δ13C. From this model, we estimated a correction factor that controls for atmospheric change; this correction reduces bias associated with changes in atmospheric isotopic baseline and facilitates comparison of tissue δ13C collected over multiple years. To exemplify the importance of accounting for atmospheric CO2 δ13C depletion, we applied the correction to a dataset of collagen δ13C obtained from mountain lion (Puma concolor) bone samples collected in California between 1893 and 1995. Before correction, in three of four ecoregions collagen δ13C decreased significantly concurrent with depletion of atmospheric CO2 δ13C (n ≥ 32, P ≤ 0.01). Application of the correction to collagen δ13C data removed trends from regions demonstrating significant declines, and measurement error associated with the correction did not add substantial variation to adjusted estimates. Controlling for long-term atmospheric variation and correcting tissue samples for changes in isotopic baseline facilitate analysis of samples that span a large temporal range. © Springer-Verlag 2005.
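The correction amounts to re-expressing each tissue value on a fixed atmospheric baseline; the sketch below uses a coarse, approximate atmospheric δ13C record (the interpolation table is a placeholder standing in for the ice-core model fitted in the study).

```python
import numpy as np

# Approximate atmospheric CO2 delta13C baseline (per mil); placeholder values
# standing in for the ice-core/direct-measurement record used in the study.
years_ref = np.array([1850.0, 1900.0, 1950.0, 1980.0, 2000.0])
d13c_atm = np.array([-6.4, -6.6, -6.9, -7.6, -8.0])

def suess_correct(d13c_tissue, year, reference_year=1850.0):
    """Shift a tissue delta13C value onto a fixed atmospheric baseline."""
    atm_sample = np.interp(year, years_ref, d13c_atm)
    atm_ref = np.interp(reference_year, years_ref, d13c_atm)
    return d13c_tissue - (atm_sample - atm_ref)

print(suess_correct(-21.5, 1995))   # later samples are shifted upward
```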
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D; Gach, H; Li, H
Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images could be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required, otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) Robust body segmentation on the normalized image gradient map, 2) Robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) An effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
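A compact sketch of HUM with repeated Gaussian convolution and a body mask (the parameter values and the masked-smoothing details are assumptions; the study's actual implementation, segmentation, and FOV detection are more involved).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hum_correct(image, body_mask, sigma=25.0, n_passes=3, eps=1e-6):
    """Homomorphic unsharp mask bias correction restricted to a body mask."""
    mask = body_mask.astype(bool)
    num = image.astype(float) * mask
    den = mask.astype(float)
    for _ in range(n_passes):               # repeated Gaussian convolution
        num = gaussian_filter(num, sigma)
        den = gaussian_filter(den, sigma)
    bias = num / np.maximum(den, eps)       # slowly varying multiplicative field
    mean_in_body = image[mask].mean()
    return np.where(mask, image * mean_in_body / np.maximum(bias, eps), image)

# Toy 2D example: a flat phantom modulated by a smooth intensity ramp
y, x = np.mgrid[0:128, 0:128]
mask = (x - 64) ** 2 + (y - 64) ** 2 < 50 ** 2
img = 100.0 * (1 + 0.3 * x / 128) * mask
print(np.std(img[mask]), np.std(hum_correct(img, mask)[mask]))
```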
Schoen, K; Snow, W M; Kaiser, H; Werner, S A
2005-01-01
The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10^-3 fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H2 in a recent neutron interferometry experiment.
Bouchet, Aude; Schütz, Markus; Chiavarino, Barbara; Crestoni, Maria Elisa; Fornarini, Simonetta; Dopfer, Otto
2015-10-21
The structure and dynamics of the highly flexible side chain of (protonated) phenylethylamino neurotransmitters are essential for their function. The geometric, vibrational, and energetic properties of the protonated neurotransmitter 2-phenylethylamine (H(+)PEA) are characterized in the N-H stretch range by infrared photodissociation (IRPD) spectroscopy of cold ions using rare gas tagging (Rg = Ne and Ar) and anharmonic calculations at the B3LYP-D3/(aug-)cc-pVTZ level including dispersion corrections. A single folded gauche conformer (G) protonated at the basic amino group and stabilized by an intramolecular NH(+)-π interaction is observed. The dispersion-corrected density functional theory calculations reveal the important effects of dispersion on the cation-π interaction and the large vibrational anharmonicity of the NH3(+) group involved in the NH(+)-π hydrogen bond. They allow for assigning overtone and combination bands and explain anomalous intensities observed in previous IR multiple-photon dissociation spectra. Comparison with neutral PEA reveals the large effects of protonation on the geometric and electronic structure.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm(2) in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
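Model selection of this kind can be sketched with nonlinear fits and a corrected AIC; the synthetic signal, noise level, and starting values below are arbitrary, and only three of the five candidate models are included for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 200, 500, 800, 1200, 1500, 1800, 2200, 2500], float)  # s/mm^2

def mono(b, D):                      # mono-exponential
    return np.exp(-b * D)

def tce(b, D, K):                    # truncated cumulant expansion (kurtosis)
    return np.exp(-b * D + (K / 6.0) * (b * D) ** 2)

def biexp(b, f, D1, D2):             # bi-exponential
    return f * np.exp(-b * D1) + (1 - f) * np.exp(-b * D2)

def aicc(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Synthetic signal with a little noise, loosely mimicking brain tissue
rng = np.random.default_rng(7)
y = tce(b, 0.8e-3, 1.0) + rng.normal(0, 0.01, b.size)

fits = {"mono": (mono, [1e-3]),
        "tce": (tce, [1e-3, 1.0]),
        "biexp": (biexp, [0.7, 1.5e-3, 0.3e-3])}
for name, (fn, p0) in fits.items():
    popt, _ = curve_fit(fn, b, y, p0=p0, maxfev=10000)
    print(name, "AICc =", round(aicc(y, fn(b, *popt), len(popt)), 1))
```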
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period, and processed by the same algorithms, models or methods. These differences can be mainly quantitatively described from three aspects, i.e. multiple remote sensing observations, crop parameters estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural application with multiple remotely sensed observations from different sources. The new method was constructed on the basis of physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were involved to extract statistical characteristics of multiple surface reflectance datasets, and further quantitatively analyse spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at small spatial scale as the baseline data, theories of Gaussian distribution were selected for multiple surface reflectance datasets correction based on the above obtained physical characteristics and mathematical distribution properties, and their spatial variations. This proposed method was verified by two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of underlying surfaces. Experimental results indicate that differences of surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provide a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis evaluation.
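A bare-bones version of the Gaussian-distribution adjustment idea is to match the first two moments of the coarse-scale reflectance sample to the fine-scale baseline; the actual method accounts for spatial variation of these characteristics, and the numbers here are synthetic.

```python
import numpy as np

def gaussian_match(coarse_refl, fine_refl):
    """Adjust coarse-scale reflectance to the fine-scale baseline by matching
    the first two moments (a Gaussian-distribution assumption)."""
    mu_c, sd_c = coarse_refl.mean(), coarse_refl.std()
    mu_f, sd_f = fine_refl.mean(), fine_refl.std()
    return (coarse_refl - mu_c) / sd_c * sd_f + mu_f

rng = np.random.default_rng(10)
fine = rng.normal(0.25, 0.04, 10000)      # baseline (small-scale) reflectance
coarse = rng.normal(0.28, 0.03, 2500)     # biased, smoother coarse-scale sample
adj = gaussian_match(coarse, fine)
print(adj.mean(), adj.std())              # moments now match the baseline
```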
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on group contribution method, show that specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
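The two classifiers named above are available off the shelf in scikit-learn; the sketch below runs them on synthetic descriptors (the data, descriptor count, and hyperparameters are placeholders, not the curated fragrance dataset or the tuned models of the study).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical descriptor matrix X and binary labels y (1 = readily biodegradable)
rng = np.random.default_rng(9)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 200) > 0).astype(int)

for name, clf in [("CART", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(name, "CV accuracy:", round(acc, 2))
```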
Holmes, Susan; Alekseyenko, Alexander; Timme, Alden; Nelson, Tyrrell; Pasricha, Pankaj Jay; Spormann, Alfred
2011-01-01
This article explains the statistical and computational methodology used to analyze species abundances collected using the LBNL PhyloChip in a study of Irritable Bowel Syndrome (IBS) in rats. Some tools already available for the analysis of ordinary microarray data are useful in this type of statistical analysis. For instance, in correcting for multiple testing we use family-wise error rate control and step-down tests (available in the multtest package). Once the most significant species are chosen, we use the hypergeometric tests familiar from testing GO categories to test specific phyla and families. We provide examples of normalization, multivariate projections, batch effect detection and integration of phylogenetic covariation, as well as tree equalization and robustification methods.
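The phylum-level test mentioned above is a standard hypergeometric enrichment calculation; the counts in this sketch are made up for illustration.

```python
from scipy.stats import hypergeom

# Hypothetical counts: M species on the chip, n of them in the phylum of
# interest, N species called significant, k of those from that phylum.
M, n, N, k = 8000, 250, 120, 14

p_enrich = hypergeom.sf(k - 1, M, n, N)   # P(X >= k), sampling without replacement
print(p_enrich)
```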
InSAR Tropospheric Correction Methods: A Statistical Comparison over Different Regions
NASA Astrophysics Data System (ADS)
Bekaert, D. P.; Walters, R. J.; Wright, T. J.; Hooper, A. J.; Parker, D. J.
2015-12-01
Observing small magnitude surface displacements through InSAR is highly challenging, and requires advanced correction techniques to reduce noise. In fact, one of the largest obstacles facing the InSAR community is related to tropospheric noise correction. Spatial and temporal variations in temperature, pressure, and relative humidity result in a spatially-variable InSAR tropospheric signal, which masks smaller surface displacements due to tectonic or volcanic deformation. Correction methods applied today include those relying on weather model data, GNSS and/or spectrometer data. Unfortunately, these methods are often limited by the spatial and temporal resolution of the auxiliary data. Alternatively a correction can be estimated from the high-resolution interferometric phase by assuming a linear or a power-law relationship between the phase and topography. For these methods, the challenge lies in separating deformation from tropospheric signals. We will present results of a statistical comparison of the state-of-the-art tropospheric corrections estimated from spectrometer products (MERIS and MODIS), a low and high spatial-resolution weather model (ERA-I and WRF), and both the conventional linear and power-law empirical methods. We evaluate the correction capability over Southern Mexico, Italy, and El Hierro, and investigate the impact of increasing cloud cover on the accuracy of the tropospheric delay estimation. We find that each method has its strengths and weaknesses, and suggest that further developments should aim to combine different correction methods. All the presented methods are included in our new open source software package called TRAIN - Toolbox for Reducing Atmospheric InSAR Noise (Bekaert et al., in review), which is available to the community: Bekaert, D., R. Walters, T. Wright, A. Hooper, and D. Parker (in review), Statistical comparison of InSAR tropospheric correction techniques, Remote Sensing of Environment.
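The simplest of the empirical corrections compared here, a single linear phase-topography relationship, can be sketched as follows (synthetic data; TRAIN itself offers far more options, including the power-law and spatially variable variants).

```python
import numpy as np

def linear_tropo_correction(phase, elevation, deform_mask=None):
    """Estimate and remove a linear phase-topography relationship.

    A robust version would exclude deforming areas (deform_mask) and consider
    power-law or spatially variable relationships; this is the simplest case.
    """
    keep = np.isfinite(phase) & np.isfinite(elevation)
    if deform_mask is not None:
        keep &= ~deform_mask
    slope, intercept = np.polyfit(elevation[keep], phase[keep], 1)
    return phase - (slope * elevation + intercept)

# Synthetic interferogram: stratified delay proportional to elevation plus noise
rng = np.random.default_rng(8)
elev = rng.uniform(0, 2500, 5000)
phase = 0.004 * elev + rng.normal(0, 0.5, elev.size)
corrected = linear_tropo_correction(phase, elev)
print(np.corrcoef(corrected, elev)[0, 1])   # near zero after correction
```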
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
Zhang, Zhi-Yong; Wu, Rong; Wang, Qun; Zhang, Zhi-Rong; López-Pujol, Jordi; Fan, Deng-Mei; Li, De-Zhu
2013-01-01
In subtropical China, large-scale phylogeographic comparisons among multiple sympatric plants with similar ecological preferences are scarce, making generalizations about common response to historical events necessarily tentative. A phylogeographic comparison of two sympatric Chinese beeches (Fagus lucida and F. longipetiolata, 21 and 28 populations, respectively) was conducted to test whether they have responded to historical events in a concerted fashion and to determine whether their phylogeographic structure is exclusively due to Quaternary events or it is also associated with pre-Quaternary events. Twenty-three haplotypes were recovered for F. lucida and F. longipetiolata (14 each and five shared). Both species exhibited a species-specific mosaic distribution of haplotypes, with many of them being range-restricted and even private to populations. The two beeches had comparable total haplotype diversity but F. lucida had much higher within-population diversity than F. longipetiolata. Molecular dating showed that the time to the most recent common ancestor of all haplotypes was 6.36 Ma, with most haplotypes differentiating during the Quaternary. [Correction added on 14 October 2013, after first online publication: the time unit has been corrected to ‘6.36’.] Our results support a late Miocene origin and southwards colonization of Chinese beeches when the aridity in Central Asia intensified and the monsoon climate began to dominate East Asia. During the Quaternary, long-term isolation in subtropical mountains of China coupled with limited gene flow would have led to the current species-specific mosaic distribution of lineages. PMID:24340187
Detector-Response Correction of Two-Dimensional γ -Ray Spectra from Neutron Capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusev, G.; Jandel, M.; Arnold, C. W.
2015-05-28
The neutron-capture reaction produces a large variety of γ-ray cascades with different γ-ray multiplicities. A measured spectral distribution of these cascades for each γ-ray multiplicity is of importance to applications and studies of γ-ray statistical properties. The DANCE array, a 4π ball of 160 BaF2 detectors, is an ideal tool for measurement of neutron-capture γ-rays. The high granularity of DANCE enables measurements of high-multiplicity γ-ray cascades. The measured two-dimensional spectra (γ-ray energy, γ-ray multiplicity) have to be corrected for the DANCE detector response in order to compare them with predictions of the statistical model or use them in applications. The detector-response correction problem becomes more difficult for a 4π detection system than for a single detector. A trial and error approach and an iterative decomposition of γ-ray multiplets have been successfully applied to the detector-response correction. As a result, applications of the decomposition methods are discussed for two-dimensional γ-ray spectra measured at DANCE from γ-ray sources and from the 10B(n, γ) and 113Cd(n, γ) reactions.
[Determination of ventricular volumes by a non-geometric method using gamma-cineangiography].
Faivre, R; Cardot, J C; Baud, M; Verdenet, J; Berthout, P; Bidet, A C; Bassand, J P; Maurat, J P
1985-08-01
The authors suggest a new way of determining ventricular volume by a non-geometric method using gamma-cineangiography. The results obtained by this method were compared with those obtained by a geometric method and contrast ventriculography in 94 patients. The new non-geometric method assumes that the radioactive tracer is evenly distributed in the cardiovascular system so that blood radioactivity levels can be measured. The ventricular volume is then equal to the ratio of radioactivity in the LV zone to that of 1 ml of blood. Comparison of the radionuclide and angiographic data in the first 60 patients showed a systematic underestimation despite a satisfactory statistical correlation (r = 0.87, y = 0.30 X + 6.3). This underestimation is due to the phenomenon of attenuation related to the depth of the heart in the thoracic cage and to autoabsorption at source, the degree of which depends on the ventricular volume. An empirical method of calculation allows correction for these factors by taking into account absorption in the tissues by relating to body surface area and autoabsorption at source by correcting for the surface of isotopic ventricular projection expressed in pixels. Using the data of this empirical method, the correction formula for radionuclide ventricular volume is obtained by a multiple linear regression: corrected radionuclide volume = K X measured radionuclide volume (Formula: see text). This formula was applied in the following 34 patients. The correlation between the uncorrected and corrected radionuclide volumes and the angiographic volumes was improved (r = 0.65 vs r = 0.94) and the values were more accurate (y = 0.18 X + 26 vs y = 0.96 X + 1.5). (ABSTRACT TRUNCATED AT 250 WORDS)
Gerstenecker, Adam; Eakin, Amanda; Triebel, Kristen; Martin, Roy; Swenson-Dravis, Dana; Petersen, Ronald C; Marson, Daniel
2016-06-01
Financial capacity is an instrumental activity of daily living (IADL) that comprises multiple abilities and is critical to independence and autonomy in older adults. Because of its cognitive complexity, financial capacity is often the first IADL to show decline in prodromal and clinical Alzheimer's disease and related disorders. Despite its importance, few standardized assessment measures of financial capacity exist and there is little, if any, normative data available to evaluate financial skills in the elderly. The Financial Capacity Instrument-Short Form (FCI-SF) is a brief measure of financial skills designed to evaluate financial skills in older adults with cognitive impairment. In the current study, we present age- and education-adjusted normative data for FCI-SF variables in a sample of 1344 cognitively normal, community-dwelling older adults participating in the Mayo Clinic Study of Aging (MCSA) in Olmsted County, Minnesota. Individual FCI-SF raw scores were first converted to age-corrected scaled scores based on position within a cumulative frequency distribution and then grouped within 4 empirically supported and overlapping age ranges. These age-corrected scaled scores were then converted to age- and education-corrected scaled scores using the same methodology. This study has the potential to substantially enhance financial capacity evaluations of older adults through the introduction of age- and education-corrected normative data for the FCI-SF by allowing clinicians to: (a) compare an individual's performance to that of a sample of similar age and education peers, (b) interpret various aspects of financial capacity relative to a normative sample, and (c) make comparisons between these aspects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Improved multidimensional semiclassical tunneling theory.
Wagner, Albert F
2013-12-12
We show that the analytic multidimensional semiclassical tunneling formula of Miller et al. [Miller, W. H.; Hernandez, R.; Handy, N. C.; Jayatilaka, D.; Willets, A. Chem. Phys. Lett. 1990, 172, 62] is qualitatively incorrect for deep tunneling at energies well below the top of the barrier. The origin of this deficiency is that the formula uses an effective barrier weakly related to the true energetics but correctly adjusted to reproduce the harmonic description and anharmonic corrections of the reaction path at the saddle point as determined by second order vibrational perturbation theory. We present an analytic improved semiclassical formula that correctly includes energetic information and allows a qualitatively correct representation of deep tunneling. This is done by constructing a three segment composite Eckart potential that is continuous everywhere in both value and derivative. This composite potential has an analytic barrier penetration integral from which the semiclassical action can be derived and then used to define the semiclassical tunneling probability. The middle segment of the composite potential by itself is superior to the original formula of Miller et al. because it incorporates the asymmetry of the reaction barrier produced by the known reaction exoergicity. Comparison of the semiclassical and exact quantum tunneling probability for the pure Eckart potential suggests a simple threshold multiplicative factor to the improved formula to account for quantum effects very near threshold not represented by semiclassical theory. The deep tunneling limitations of the original formula are echoed in semiclassical high-energy descriptions of bound vibrational states perpendicular to the reaction path at the saddle point. However, typically ab initio energetic information is not available to correct it. The Supporting Information contains a Fortran code, test input, and test output that implements the improved semiclassical tunneling formula.
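The barrier-penetration integral at the heart of such semiclassical tunneling formulas can be illustrated with a generic WKB calculation for a symmetric Eckart barrier. This is only a sketch in assumed atomic-style units, not the paper's improved three-segment composite formula or the Fortran code in its Supporting Information.

import numpy as np

def eckart_wkb_transmission(energy, V0=0.02, a=1.0, mass=1836.0, hbar=1.0):
    # Symmetric Eckart barrier V(x) = V0 / cosh^2(x/a), for energies below the barrier top.
    xt = a * np.arccosh(np.sqrt(V0 / energy))            # classical turning points +/- xt
    x = np.linspace(-xt, xt, 2001)[1:-1]                 # skip endpoints where the integrand -> 0
    integrand = np.sqrt(2.0 * mass * (V0 / np.cosh(x / a) ** 2 - energy))
    theta = np.trapz(integrand, x) / hbar                # barrier penetration integral
    return 1.0 / (1.0 + np.exp(2.0 * theta))             # Kemble form of the WKB probability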
Peng, Peng
2015-01-01
Researchers are increasingly interested in working memory (WM) training. However, it is unclear whether it strengthens comprehension in young children who are at risk for learning difficulties. We conducted a modest study of whether the training of verbal WM would improve verbal WM and passage listening comprehension, and whether training effects differed between two approaches: training with and without strategy instruction. A total of 58 first-grade children were randomly assigned to 3 groups: WM training with a rehearsal strategy, WM training without strategy instruction, and controls. Every member of the 2 training groups received a one-to-one, 35-minute session of verbal WM training on each of 10 consecutive school days, totaling 5.8 hours. Both training groups improved on trained verbal WM tasks, with the rehearsal group making greater gains. Without correction for multiple group comparisons, the rehearsal group made reliable improvements over controls on an untrained verbal WM task and on passage listening comprehension and listening retell measures. The no-strategy-instruction group outperformed controls on passage listening comprehension. When corrected for multiple contrasts, these group differences disappeared, but were associated with moderate-to-large effect sizes. Findings suggest—however tentatively—that brief but intensive verbal WM training may strengthen the verbal WM and comprehension performance of young children at risk. Necessary caveats and possible implications for theory and future research are discussed. PMID:26156961
Artificial Intelligence in Mitral Valve Analysis
Jeganathan, Jelliffe; Knio, Ziyad; Amador, Yannis; Hai, Ting; Khamooshian, Arash; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2017-01-01
Background: Echocardiographic analysis of mitral valve (MV) has become essential for diagnosis and management of patients with MV disease. Currently, the various software used for MV analysis require manual input and are prone to interobserver variability in the measurements. Aim: The aim of this study is to determine the interobserver variability in an automated software that uses artificial intelligence for MV analysis. Settings and Design: Retrospective analysis of intraoperative three-dimensional transesophageal echocardiography data acquired from four patients with normal MV undergoing coronary artery bypass graft surgery in a tertiary hospital. Materials and Methods: Echocardiographic data were analyzed using the eSie Valve Software (Siemens Healthcare, Mountain View, CA, USA). Three examiners analyzed three end-systolic (ES) frames from each of the four patients. A total of 36 ES frames were analyzed and included in the study. Statistical Analysis: A multiple mixed-effects ANOVA model was constructed to determine if the examiner, the patient, and the loop had a significant effect on the average value of each parameter. A Bonferroni correction was used to correct for multiple comparisons, and P = 0.0083 was considered to be significant. Results: Examiners did not have an effect on any of the six parameters tested. Patient and loop had an effect on the average parameter value for each of the six parameters as expected (P < 0.0083 for both). Conclusion: We were able to conclude that using automated analysis, it is possible to obtain results with good reproducibility, which only requires minimal user intervention. PMID:28393769
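The significance criterion quoted above follows directly from dividing the familywise alpha by the six parameters tested; a trivial sketch with illustrative p-values (not study results):

alpha, n_params = 0.05, 6
threshold = alpha / n_params                 # 0.00833..., i.e., the P = 0.0083 criterion above
p_values = [0.020, 0.004, 0.310]             # illustrative values only
significant = [p for p in p_values if p < threshold]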
Peng, Peng; Fuchs, Douglas
2017-01-01
Researchers are increasingly interested in working memory (WM) training. However, it is unclear whether it strengthens comprehension in young children who are at risk for learning difficulties. We conducted a modest study of whether the training of verbal WM would improve verbal WM and passage listening comprehension and whether training effects differed between two approaches: training with and without strategy instruction. A total of 58 first-grade children were randomly assigned to three groups: WM training with a rehearsal strategy, WM training without strategy instruction, and controls. Each member of the two training groups received a one-to-one, 35-min session of verbal WM training on each of 10 consecutive school days, totaling 5.8 hr. Both training groups improved on trained verbal WM tasks, with the rehearsal group making greater gains. Without correction for multiple group comparisons, the rehearsal group made reliable improvements over controls on an untrained verbal WM task and on passage listening comprehension and listening retell measures. The no-strategy-instruction group outperformed controls on passage listening comprehension. When corrected for multiple contrasts, these group differences disappeared but were associated with moderate to large effect sizes. Findings suggest-however tentatively-that brief but intensive verbal WM training may strengthen the verbal WM and comprehension performance of young children at risk. Necessary caveats and possible implications for theory and future research are discussed. © Hammill Institute on Disabilities 2015.
Association of infectious mononucleosis with multiple sclerosis. A population-based study.
Ramagopalan, Sreeram V; Valdar, William; Dyment, David A; DeLuca, Gabriele C; Yee, Irene M; Giovannoni, Gavin; Ebers, George C; Sadovnick, A Dessa
2009-01-01
Genetic and environmental factors have important roles in multiple sclerosis (MS) susceptibility. Several studies have attempted to correlate exposure to viral illness with the subsequent development of MS. Here in a population-based Canadian cohort, we investigate the relationship between prior clinical infection or vaccination and the risk of MS. Using the longitudinal Canadian database, 14,362 MS index cases and 7,671 spouse controls were asked about history of measles, mumps, rubella, varicella and infectious mononucleosis as well as details about vaccination with measles, mumps, rubella, hepatitis B and influenza vaccines. Comparisons were made between cases and spouse controls. Spouse controls and stratification by sex appear to correct for ascertainment bias because with a single exception we found no significant differences between cases and controls for all viral exposures and vaccinations. However, 699 cases and 165 controls reported a history of infectious mononucleosis (p < 0.001, corrected odds ratio 2.06, 95% confidence interval 1.71-2.48). Females were more aware of disease history than males (p < 0.001). The data further confirms a reporting distortion between males and females. Historically reported measles, mumps, rubella, varicella and vaccination for hepatitis B, influenza, measles, mumps and rubella are not associated with increased risk of MS later in life. A clinical history of infectious mononucleosis is conspicuously associated with increased MS susceptibility. These findings support studies implicating Epstein-Barr virus in MS disease susceptibility, but a co-association between MS susceptibility and clinically apparent infectious mononucleosis cannot be excluded.
Pain and temperature processing in dementia: a clinical and neuroanatomical analysis
Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Rohrer, Jonathan D.; Schott, Jonathan M.; Rossor, Martin N.
2015-01-01
Symptoms suggesting altered processing of pain and temperature have been described in dementia diseases and may contribute importantly to clinical phenotypes, particularly in the frontotemporal lobar degeneration spectrum, but the basis for these symptoms has not been characterized in detail. Here we analysed pain and temperature symptoms using a semi-structured caregiver questionnaire recording altered behavioural responsiveness to pain or temperature for a cohort of patients with frontotemporal lobar degeneration (n = 58, 25 female, aged 52–84 years, representing the major clinical syndromes and representative pathogenic mutations in the C9orf72 and MAPT genes) and a comparison cohort of patients with amnestic Alzheimer’s disease (n = 20, eight female, aged 53–74 years). Neuroanatomical associations were assessed using blinded visual rating and voxel-based morphometry of patients’ brain magnetic resonance images. Certain syndromic signatures were identified: pain and temperature symptoms were particularly prevalent in behavioural variant frontotemporal dementia (71% of cases) and semantic dementia (65% of cases) and in association with C9orf72 mutations (6/6 cases), but also developed in Alzheimer’s disease (45% of cases) and progressive non-fluent aphasia (25% of cases). While altered temperature responsiveness was more common than altered pain responsiveness across syndromes, blunted responsiveness to pain and temperature was particularly associated with behavioural variant frontotemporal dementia (40% of symptomatic cases) and heightened responsiveness with semantic dementia (73% of symptomatic cases) and Alzheimer’s disease (78% of symptomatic cases). In the voxel-based morphometry analysis of the frontotemporal lobar degeneration cohort, pain and temperature symptoms were associated with grey matter loss in a right-lateralized network including insula (P < 0.05 corrected for multiple voxel-wise comparisons within the prespecified anatomical region of interest) and anterior temporal cortex (P < 0.001 uncorrected over whole brain) previously implicated in processing homeostatic signals. Pain and temperature symptoms accompanying C9orf72 mutations were specifically associated with posterior thalamic atrophy (P < 0.05 corrected for multiple voxel-wise comparisons within the prespecified anatomical region of interest). Together the findings suggest candidate cognitive and neuroanatomical bases for these salient but under-appreciated phenotypic features of the dementias, with wider implications for the homeostatic pathophysiology and clinical management of neurodegenerative diseases. PMID:26463677
Limb Correction of Polar-Orbiting Imagery for the Improved Interpretation of RGB Composites
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Elmer, Nicholas
2016-01-01
Red-Green-Blue (RGB) composite imagery combines information from several spectral channels into one image to aid in the operational analysis of atmospheric processes. However, infrared channels are adversely affected by the limb effect, the result of an increase in optical path length of the absorbing atmosphere between the satellite and the earth as viewing zenith angle increases. This paper reviews a newly developed technique to quickly correct for limb effects in both clear and cloudy regions using latitudinally and seasonally varying limb correction coefficients for real-time applications. These limb correction coefficients account for the increase in optical path length in order to produce limb-corrected RGB composites. The improved utility of a limb-corrected Air Mass RGB composite from the application of this approach is demonstrated using Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, the limb correction can be applied to any polar-orbiting sensor infrared channels, provided the proper limb correction coefficients are calculated. Corrected RGB composites provide multiple advantages over uncorrected RGB composites, including increased confidence in the interpretation of RGB features, improved situational awareness for operational forecasters, and the ability to use RGB composites from multiple sensors jointly to increase the temporal frequency of observations.
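A heavily simplified sketch of an additive limb correction of this kind: a per-channel coefficient, assumed here to come from a latitude- and season-dependent lookup table, scales a term that grows with the optical path length as viewing zenith angle increases. The functional form and the sign convention (infrared brightness temperatures are typically depressed toward the limb) are assumptions for illustration, not the operational implementation described above.

import numpy as np

def limb_correct(bt_kelvin, vza_deg, coeff_k):
    # coeff_k: assumed correction coefficient (K) for this channel, latitude band and season.
    path_growth = 1.0 / np.cos(np.radians(vza_deg)) - 1.0   # optical-path increase relative to nadir
    return bt_kelvin + coeff_k * path_growth                 # add back the limb-induced cooling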
Buhk, J-H; Groth, M; Sehner, S; Fiehler, J; Schmidt, N O; Grzyska, U
2013-09-01
To evaluate a novel algorithm for correcting beam hardening artifacts caused by metal implants in computed tomography performed on a C-arm angiography system equipped with a flat panel (FP-CT). 16 datasets of cerebral FP-CT acquisitions after coil embolization of brain aneurysms in the context of acute subarachnoid hemorrhage have been reconstructed by applying a soft tissue kernel with and without a novel reconstruction filter for metal artifact correction. Image reading was performed in multiplanar reformations (MPR) in average mode on a dedicated radiological workplace in comparison to the preinterventional native multisection CT (MS-CT) scan serving as the anatomic gold standard. Two independent radiologists performed image scoring following a defined scale in direct comparison of the image data with and without artifact correction. For statistical analysis, a random intercept model was calculated. The inter-rater agreement was very high (ICC = 86.3 %). The soft tissue image quality and visualization of the CSF spaces at the level of the implants was substantially improved. The additional metal artifact correction algorithm did not induce impairment of the subjective image quality in any other brain regions. Adding metal artifact correction to FP-CT in an acute postinterventional setting helps to visualize the close vicinity of the aneurysm at a generally consistent image quality. © Georg Thieme Verlag KG Stuttgart · New York.
ERIC Educational Resources Information Center
Clariana, Roy B.
1990-01-01
Discussion of various types of feedback used in computer-assisted instruction focuses on a study of low-ability eleventh graders that compared the effectiveness of answer until correct (AUC) feedback with knowledge of correct response (KCR) feedback. Achievement data on posttests as well as time data are analyzed. (11 references) (LRW)
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons undergoing multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
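The paper's correction is a fully Bayesian conditional-logistic model; as a much simpler, swapped-in illustration of why correction matters at all, the sketch below simulates a classical measurement error model for a continuous outcome and contrasts the naive (attenuated) slope with a regression-calibration estimate. The effect size and error variance are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                         # true exposure
u = rng.normal(scale=0.5, size=n)              # classical measurement error
w = x + u                                      # observed (mismeasured) exposure
y = 0.8 * x + rng.normal(size=n)               # outcome generated from the true exposure

cov_wy = np.cov(w, y)
naive_slope = cov_wy[0, 1] / cov_wy[0, 0]                  # attenuated toward zero
reliability = np.var(x, ddof=1) / np.var(w, ddof=1)        # lambda = var(X) / var(W)
corrected_slope = naive_slope / reliability                # regression-calibration correction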
NASA Astrophysics Data System (ADS)
Francis, Olivier; Baumann, Henri; Volarik, Tomas; Rothleitner, Christian; Klein, Gilbert; Seil, Marc; Dando, Nicolas; Tracey, Ray; Ullrich, Christian; Castelein, Stefaan; Hua, Hu; Kang, Wu; Chongyang, Shen; Songbo, Xuan; Hongbo, Tan; Zhengyuan, Li; Pálinkás, Vojtech; Kostelecký, Jakub; Mäkinen, Jaakko; Näränen, Jyri; Merlet, Sébastien; Farah, Tristan; Guerlin, Christine; Pereira Dos Santos, Franck; Le Moigne, Nicolas; Champollion, Cédric; Deville, Sabrina; Timmen, Ludger; Falk, Reinhard; Wilmes, Herbert; Iacovone, Domenico; Baccaro, Francesco; Germak, Alessandro; Biolcati, Emanuele; Krynski, Jan; Sekowski, Marcin; Olszak, Tomasz; Pachuta, Andrzej; Agren, Jonas; Engfeldt, Andreas; Reudink, René; Inacio, Pedro; McLaughlin, Daniel; Shannon, Geoff; Eckl, Marc; Wilkins, Tim; van Westrum, Derek; Billson, Ryan
2013-06-01
We present the results of the third European Comparison of Absolute Gravimeters held in Walferdange, Grand Duchy of Luxembourg, in November 2011. Twenty-two gravimeters from both metrological and non-metrological institutes are compared. For the first time, corrections for the laser beam diffraction and the self-attraction of the gravimeters are implemented. The gravity observations are also corrected for geophysical gravity changes that occurred during the comparison using the observations of a superconducting gravimeter. We show that these corrections improve the degree of equivalence between the gravimeters. We present the results for two different combinations of data. In the first one, we use only the observations from the metrological institutes. In the second solution, we include all the data from both metrological and non-metrological institutes. Those solutions are then compared with the official result of the comparison published previously and based on the observations of the metrological institutes and the gravity differences at the different sites as measured by non-metrological institutes. Overall, the absolute gravity meters agree with one another with a standard deviation of 3.1 µGal. Finally, the results of this comparison are linked to previous ones. We conclude with some important recommendations for future comparisons.
ERIC Educational Resources Information Center
Slepkov, Aaron D.; Vreugdenhil, Andrew J.; Shiell, Ralph C.
2016-01-01
There are numerous benefits to answer-until-correct (AUC) approaches to multiple-choice testing, not the least of which is the straightforward allotment of partial credit. However, the benefits of granting partial credit can be tempered by the inevitable increase in test scores and by fears that such increases are further contaminated by a large…
Maritime Adaptive Optics Beam Control
2010-09-01
Liquid Crystal; LMS: Least Mean Square; MIMO: Multiple-Input Multiple-Output; MMDM: Micromachined Membrane Deformable Mirror; MSE: Mean Square Error. ... to determine how the beam is distorted, a control computer to calculate the correction to be applied, and a corrective element, usually a deformable mirror. ... During this research, an overview of the system modification is provided here. Using additional mirrors and reflecting the beam to and from ...
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
Raij, Tuukka T; Mäntylä, Teemu; Kieseppä, Tuula; Suvisaari, Jaana
2015-08-30
The dopamine theory proposes the relationship of delusions to aberrant signaling in striatal circuitries that can be normalized with dopamine D2 receptor-blocking drugs. Localization of such circuitries, as well as their upstream and downstream signaling, remains poorly known. We collected functional magnetic resonance images from first-episode psychosis patients and controls during an audiovisual movie. Final analyses included 20 patients and 20 controls; another sample of 10 patients and 10 controls was used to calculate a comparison signal-time course. We identified putamen circuitry in which the signal aberrance (poor correlation with the comparison signal time course) was predicted by the dopamine theory, being greater in patients than controls; correlating positively with delusion scores; and correlating negatively with antipsychotic-equivalent dosage. In Granger causality analysis, patients showed a compromised contribution of the cortical salience network to the putamen and compromised contribution of the putamen to the default mode network. Results were corrected for multiple comparisons at the cluster level with primary voxel-wise threshold p < 0.005 for the salience network contribution, but liberal primary threshold p < 0.05 was used in other group comparisons. If replicated in larger studies, these findings may help unify and extend current hypotheses on dopaminergic dysfunction, salience processing and pathogenesis of delusions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and thereby the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from NIR to visible (VIS) bands using the SSE. However, it directly applies the weight value to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in NIR with no residual errors for selected aerosol models. Then it spectrally extrapolates the reflectance contribution from NIR to visible bands for each selected model using the SRAMS. To assess the performance of the algorithm regarding the errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, they were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to the GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
Ocean observations with EOS/MODIS: Algorithm development and post launch studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1995-01-01
An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.
A Critical Meta-Analysis of Lens Model Studies in Human Judgment and Decision-Making
Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W.
2013-01-01
Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have potentially biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regards to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regards to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to a) an increase in the values of the lens model components, b) reduced heterogeneity between studies, and c) an increase in the success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping. PMID:24391781
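One of the simplest artifacts handled in such psychometric meta-analyses is attenuation due to measurement error; the classical correction divides the observed correlation by the square root of the product of the reliabilities. The reliabilities below are illustrative assumptions, not values from this meta-analysis.

import math

r_observed = 0.45                              # observed judge-criterion correlation (illustrative)
rel_judgment, rel_criterion = 0.80, 0.70       # assumed reliabilities of the two measures
r_corrected = r_observed / math.sqrt(rel_judgment * rel_criterion)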
Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes
NASA Astrophysics Data System (ADS)
Liao, H.; Meyer, F. J.
2016-12-01
Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently, the first comprehensive ice motion maps of Greenland and Antarctica have been generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere. This is particularly true at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas due to the typically strong spatial variability of ionospheric phase delay at high latitudes and due to the risk of removing true deformation signals from the observations. In this study, we will first present an outline of our split-spectrum InSAR-based ionospheric correction approach and particularly highlight how our method improves upon published techniques, such as the multiple sub-band approach to boost estimation accuracy, as well as advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction on ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic have been chosen through cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we will show examples of ionosphere-affected InSAR data alongside our ionosphere-corrected results for visual comparison. We also quantitatively compared the corrected phase data to known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. From our studies we found that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on a set of system (range bandwidth, temporal and spatial baseline) and processing parameters (e.g., filtering strength and sub-band configuration). A case study in Greenland is attached below.
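The basic split-spectrum separation that such approaches build on can be written compactly: model the interferometric phase as a non-dispersive part scaling with frequency plus a dispersive (ionospheric) part scaling with the inverse of frequency, and solve for both at the carrier using two sub-band interferograms. The sketch below shows only that step, with unwrapped phases assumed; the multi-sub-band estimation, error correction and filtering mentioned above are not included.

import numpy as np

def split_spectrum(phi_low, phi_high, f_low, f_high, f0):
    # Model: phi(f) = phi_nondisp * f / f0 + phi_iono * f0 / f at the sub-band centers.
    denom = f_high ** 2 - f_low ** 2
    phi_iono = f_low * f_high * (phi_low * f_high - phi_high * f_low) / (f0 * denom)
    phi_nondisp = f0 * (phi_high * f_high - phi_low * f_low) / denom
    return phi_iono, phi_nondisp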
NASA Astrophysics Data System (ADS)
Shiroishi, Mark S.; Gupta, Vikash; Bigjahan, Bavrina; Cen, Steven Y.; Rashid, Faisal; Hwang, Darryl H.; Lerner, Alexander; Boyko, Orest B.; Liu, Chia-Shang Jason; Law, Meng; Thompson, Paul M.; Jahanshad, Neda
2017-11-01
Background: Increases in cancer survival have made understanding the basis of cancer-related cognitive impairment (CRCI) more important. CRCI neuroimaging studies have traditionally used dedicated research brain MRIs in breast cancer survivors with small sample sizes; little is known about other non-CNS cancers. However, there is a wealth of unused data from clinically-indicated MRIs that could be used to study CRCI. Objective: Evaluate brain cortical structural differences in those with non-CNS cancers using clinically-indicated MRIs. Design: Cross-sectional. Patients: Adult non-CNS cancer and non-cancer control (C) patients who underwent clinically-indicated MRIs. Methods: Brain cortical surface area and thickness were measured using 3D T1-weighted images. An age-adjusted linear regression model was used, and the Benjamini-Hochberg false discovery rate (FDR) procedure corrected for multiple comparisons. Group comparisons were: cancer cases with chemotherapy (Ch+) vs C, cancer cases without chemotherapy (Ch-) vs C, and a subgroup of lung cancer cases with and without chemotherapy vs C. Results: Sixty-four subjects were analyzed: 22 Ch+, 23 Ch- and 19 C patients. Subgroup analysis of 16 lung cancer (LCa) cases was also performed. Statistically significant decreases in either cortical surface area or thickness were found in multiple ROIs primarily within the frontal and temporal lobes for all comparisons. Limitations: Several limitations were apparent, including a small sample size that precluded adjustment for other covariates. Conclusions: Our preliminary results suggest that various types of non-CNS cancers, both with and without chemotherapy, may result in brain structural abnormalities. Also, there is a wealth of untapped clinical MRIs that could be used for future CRCI studies.
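A compact sketch of the Benjamini-Hochberg FDR step referenced above (the generic procedure only, not the study's age-adjusted regression pipeline):

import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    # Reject the k smallest p-values, where k is the largest rank with p(k) <= (k / m) * q.
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    passed = ranked <= (np.arange(1, m + 1) / m) * q
    k = (np.nonzero(passed)[0].max() + 1) if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject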
Kim, Jae-Hun; Ha, Tae Lin; Im, Geun Ho; Yang, Jehoon; Seo, Sang Won; Chung, Julius Juhyun; Chae, Sun Young; Lee, In Su; Lee, Jung Hee
2014-03-05
In this study, we have shown the potential of a voxel-based analysis for imaging amyloid plaques and its utility in monitoring therapeutic response in Alzheimer's disease (AD) mice using manganese oxide nanoparticles conjugated with an antibody of Aβ1-40 peptide (HMON-abAβ40). T1-weighted MR brain images of a drug-treated AD group (n=7), a nontreated AD group (n=7), and a wild-type group (n=7) were acquired using a 7.0 T MRI system before (D-1), 24-h (D+1) after, and 72-h (D+3) after injection with an HMON-abAβ40 contrast agent. For the treatment of AD mice, DAPT was injected intramuscularly into AD transgenic mice (50 mg/kg of body weight). For voxel-based analysis, the skull-stripped mouse brain images were spatially normalized, and these voxels' intensities were corrected to reduce voxel intensity differences across scans in different mice. Statistical analysis showed higher normalized MR signal intensity in the frontal cortex and hippocampus of AD mice over wild-type mice on D+1 and D+3 (P<0.01, uncorrected for multiple comparisons). After the treatment of AD mice, the normalized MR signal intensity in the frontal cortex and hippocampus decreased significantly in comparison with nontreated AD mice on D+1 and D+3 (P<0.01, uncorrected for multiple comparisons). These results were confirmed by histological analysis using a thioflavin staining. This unique strategy allows us to detect brain regions that are subjected to amyloid plaque deposition and has the potential for human applications in monitoring therapeutic response for drug development in AD.
ERIC Educational Resources Information Center
Santos, Maria; Lopez-Serrano, Sonia; Manchon, Rosa M.
2010-01-01
Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a…
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assess eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
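A Monte Carlo rendering of the multivariate-normal idea (the paper itself uses a more efficient analytic treatment with small-sample corrections): sample null association z-scores for all cis variants from an MVN whose correlation matrix is the LD matrix, and ask how often the minimum p-value beats the observed one.

import numpy as np
from scipy import stats

def gene_level_p(min_p_observed, ld_corr, n_draws=100000, seed=0):
    # ld_corr: variant-by-variant LD correlation matrix for the gene's cis window.
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(ld_corr.shape[0]), ld_corr, size=n_draws)
    min_p_null = 2.0 * stats.norm.sf(np.abs(z)).min(axis=1)   # per-draw minimum two-sided p
    return float((min_p_null <= min_p_observed).mean())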
Viddeleer, Alain R; Sijens, Paul E; van Ooijen, Peter M A; Kuypers, Paul D L; Hovius, Steven E R; Oudkerk, Matthijs
2009-08-01
Nerve regeneration could be monitored by comparing MRI image intensities in time, as denervated muscles display increased signal intensity in STIR sequences. In this study long-term reproducibility of STIR image intensity was assessed under clinical conditions and the required image intensity nonuniformity correction was improved by using phantom scans obtained at multiple positions. Three-dimensional image intensity nonuniformity was investigated in phantom scans. Next, over a three-year period, 190 clinical STIR hand scans were obtained using a standardized acquisition protocol, and corrected for intensity nonuniformity by using the results of phantom scanning. The results of correction with 1, 3, and 11 phantom scans were compared. The image intensities in calibration tubes close to the hands were measured every time to determine the reproducibility of our method. With calibration, the reproducibility of STIR image intensity improved from 7.8 to 6.4%. Image intensity nonuniformity correction with 11 phantom scans gave significantly better results than correction with 1 or 3 scans. The image intensities in clinical STIR images acquired at different times can be compared directly, provided that the acquisition protocol is standardized and that nonuniformity correction is applied. Nonuniformity correction is preferably based on multiple phantom scans.
NASA Astrophysics Data System (ADS)
Johnson, Fiona; Sharma, Ashish
2011-04-01
Empirical scaling approaches for constructing rainfall scenarios from general circulation model (GCM) simulations are commonly used in water resources climate change impact assessments. However, these approaches have a number of limitations, not the least of which is that they cannot account for changes in variability or persistence at annual and longer time scales. Bias correction of GCM rainfall projections offers an attractive alternative to scaling methods as it has similar advantages to scaling in that it is computationally simple, can consider multiple GCM outputs, and can be easily applied to different regions or climatic regimes. In addition, it also allows for interannual variability to evolve according to the GCM simulations, which provides additional scenarios for risk assessments. This paper compares two scaling and four bias correction approaches for estimating changes in future rainfall over Australia and for a case study for water supply from the Warragamba catchment, located near Sydney, Australia. A validation of the various rainfall estimation procedures is conducted on the basis of the latter half of the observational rainfall record. It was found that the method leading to the lowest prediction errors varies depending on the rainfall statistic of interest. The flexibility of bias correction approaches in matching rainfall parameters at different frequencies is demonstrated. The results also indicate that for Australia, the scaling approaches lead to smaller estimates of uncertainty associated with changes to interannual variability for the period 2070-2099 compared to the bias correction approaches. These changes are also highlighted using the case study for the Warragamba Dam catchment.
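The abstract does not name the four bias correction approaches compared, so as a generic illustration the sketch below implements empirical quantile mapping, one widely used form of bias correction: each future GCM value is replaced by the observed value at the same quantile of the historical distributions (values outside the calibration range are clamped to the endpoints).

import numpy as np

def quantile_map(gcm_future, gcm_hist, obs_hist):
    gcm_hist_sorted = np.sort(np.asarray(gcm_hist, dtype=float))
    probs = np.linspace(0.0, 1.0, gcm_hist_sorted.size)
    q = np.interp(gcm_future, gcm_hist_sorted, probs)   # quantile within the historical GCM distribution
    return np.quantile(obs_hist, q)                     # mapped onto the observed distribution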
NASA Astrophysics Data System (ADS)
Babonis, G. S.; Csatho, B. M.; Schenk, A. F.
2016-12-01
We present a new record of Antarctic ice thickness changes, reconstructed from ICESat laser altimetry observations, from 2004-2009, at over 100,000 locations across the Antarctic Ice Sheet (AIS). This work generates elevation time series at ICESat groundtrack crossover regions on an observation-by-observation basis, with rigorous, quantified error estimates using the SERAC approach (Schenk and Csatho, 2012). The results include average and annual elevation, volume and mass changes in Antarctica, fully corrected for glacial isostatic adjustment (GIA) and known intercampaign biases; and partitioned into contributions from surficial processes (e.g. firn densification) and ice dynamics. The modular flexibility of the SERAC framework allows for the assimilation of multiple ancillary datasets (e.g. GIA models, Intercampaign Bias Corrections, IBC), in a common framework, to calculate mass changes for several different combinations of GIA models and IBCs and to arrive at a measure of variability from these results. We are able to determine the effect these corrections have on annual and average volume and mass change calculations in Antarctica, and to explore how these differences vary between drainage basins and with elevation. As such, this contribution presents a method that complements, and is consistent with, the 2012 Ice sheet Mass Balance Inter-comparison Exercise (IMBIE) results (Shepherd 2012). Additionally, this work will contribute to the 2016 IMBIE, which seeks to reconcile ice sheet mass changes from different observations, including laser altimetry, using different methodologies and ancillary datasets including GIA models, Firn Densification Models, and Intercampaign Bias Corrections.
Giorgio, Antonio; Zhang, Jian; Stromillo, Maria Laura; Rossi, Francesca; Battaglini, Marco; Nichelli, Lucia; Mortilla, Marzia; Portaccio, Emilio; Hakiki, Bahia; Amato, Maria Pia; De Stefano, Nicola
2017-01-01
Pediatric-onset multiple sclerosis (POMS) may represent a model of vulnerability to damage occurring during a period of active maturation of the human brain. Whereas adaptive mechanisms seem to take place in the POMS brain in the short-medium term, natural history studies have shown that these patients reach irreversible disability, despite slower progression, at a significantly younger age than adult-onset MS (AOMS) patients. We tested for the first time whether significant brain alterations already occurred in POMS patients in their early adulthood and with no or minimal disability (n = 15) in comparison with age- and disability-matched AOMS patients (n = 14) and with normal controls (NC, n = 20). We used a multimodal MRI approach by modeling, using FSL, voxelwise measures of microstructural integrity of white matter tracts and gray matter volumes with those of intra- and internetwork functional connectivity (FC) (analysis of variance, p ≤ 0.01, corrected for multiple comparisons across space). POMS patients showed, when compared with both NC and AOMS patients, altered measures of diffusion tensor imaging (reduced fractional anisotropy and/or increased diffusivities) and higher probability of lesion occurrence in a clinically eloquent region for physical disability such as the posterior corona radiata. In addition, POMS patients showed, compared with the other two groups, reduced long-range FC, assessed from resting functional MRI, between default mode network and secondary visual network, whose interaction subserves important cognitive functions such as spatial attention and visual learning. Overall, this pattern of structural damage and brain connectivity disruption in early adult POMS patients with no or minimal clinical disability might explain their unfavorable clinical outcome in the long term.
Vaccarella, Salvatore; Söderlund-Strand, Anna; Franceschi, Silvia; Plummer, Martyn; Dillner, Joakim
2013-01-01
To evaluate the pattern of co-infection of human papillomavirus (HPV) types in both sexes in Sweden. Cell samples from genital swabs, first-void urine, and genital swabs immersed in first-void urine were collected in the present cross-sectional High Throughput HPV Monitoring study. Overall, 31,717 samples from women and 9,949 from men (mean age 25) were tested for 16 HPV types using mass spectrometry. Multilevel logistic regression was used to estimate the expected number of multiple infections with specific HPV types, adjusted for age, type of sample, and accounting for correlations between HPV types due to unobserved risk factors using sample-level random effects. Bonferroni correction was used to allow for multiple comparisons (120). Observed-to-expected ratio for any multiple infections was slightly above unity in both sexes, but, for most 2-type combinations, there was no evidence of significant departure from expected numbers. HPV6/18 was found more often and HPV51/68 and 6/68 less often than expected. However, HPV68 tended to be generally underrepresented in co-infections, suggesting a sub-optimal performance of our testing method for this HPV type. We found no evidence for positive or negative clustering between HPV types included in the current prophylactic vaccines and other untargeted oncogenic types, in either sex.
Improved Density Functional Tight Binding Potentials for Metalloid Aluminum Clusters
2016-06-01
... simulations of the oxidation of Al4Cp*4 show reasonable comparison with a DFT-based Car-Parrinello method, including correct prediction of hydride transfers from Cp* to the metal centers during the ... ab initio molecular dynamics of the oxidation of Al4Cp*4 using a DFT-based Car-Parrinello method. This simulation, which ... several months ...
International comparison of activity measurements of a solution of 75Se
NASA Astrophysics Data System (ADS)
Ratel, Guy
2002-04-01
Activity measurements of a solution of 75Se, supplied by the BIPM, have been carried out by 21 laboratories within the framework of an international comparison. Seven different methods were used. Details on source preparation, experimental facilities and counting data are reported. The measured activity-concentration values show a total spread of 6.62% before correction and 6.02% after correction for delayed events, with standard deviations of the unweighted means of 0.45% and 0.36%, respectively. The correction for delayed events was measured directly by four laboratories. Unfortunately no consensus on the activity value could be deduced from their results. The results of the comparison have been entered in the tables of the International Reference System (SIR) for γ-ray emitting radionuclides. The half-life of the metastable state was also determined by two laboratories and found to be in good agreement with the values found in the literature.
Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error
ERIC Educational Resources Information Center
González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén
2015-01-01
An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…
Mansour, J K; Beaudry, J L; Lindsay, R C L
2017-12-01
Eyewitness identification experiments typically involve a single trial: A participant views an event and subsequently makes a lineup decision. As compared to this single-trial paradigm, multiple-trial designs are more efficient, but significantly reduce ecological validity and may affect the strategies that participants use to make lineup decisions. We examined the effects of a number of forensically relevant variables (i.e., memory strength, type of disguise, degree of disguise, and lineup type) on eyewitness accuracy, choosing, and confidence across 12 target-present and 12 target-absent lineup trials (N = 349; 8,376 lineup decisions). The rates of correct rejections and choosing (across both target-present and target-absent lineups) did not vary across the 24 trials, as reflected by main effects or interactions with trial number. Trial number had a significant but trivial quadratic effect on correct identifications (OR = 0.99) and interacted significantly, but again trivially, with disguise type (OR = 1.00). Trial number did not significantly influence participants' confidence in correct identifications, confidence in correct rejections, or confidence in target-absent selections. Thus, multiple-trial designs appear to have minimal effects on eyewitness accuracy, choosing, and confidence. Researchers should thus consider using multiple-trial designs for conducting eyewitness identification experiments.
Evaluation of a low-cost optical particle counter (Alphasense OPC-N2) for ambient air monitoring
NASA Astrophysics Data System (ADS)
Crilley, Leigh R.; Shaw, Marvin; Pound, Ryan; Kramer, Louisa J.; Price, Robin; Young, Stuart; Lewis, Alastair C.; Pope, Francis D.
2018-02-01
A fast-growing area of research is the development of low-cost sensors for measuring air pollutants. The affordability and size of low-cost particle sensors makes them an attractive option for use in experiments requiring a number of instruments such as high-density spatial mapping. However, for these low-cost sensors to be useful for these types of studies their accuracy and precision need to be quantified. We evaluated the Alphasense OPC-N2, a promising low-cost miniature optical particle counter, for monitoring ambient airborne particles at typical urban background sites in the UK. The precision of the OPC-N2 was assessed by co-locating 14 instruments at a site to investigate the variation in measured concentrations. Comparison to two different reference optical particle counters as well as a TEOM-FDMS enabled the accuracy of the OPC-N2 to be evaluated. Comparison of the OPC-N2 to the reference optical instruments shows some limitations for measuring mass concentrations of PM1, PM2.5 and PM10. The OPC-N2 demonstrated a significant positive artefact in measured particle mass during times of high ambient RH (> 85 %) and a calibration factor was developed based upon κ-Köhler theory, using average bulk particle aerosol hygroscopicity. Application of this RH correction factor resulted in the OPC-N2 measurements being within 33 % of the TEOM-FDMS, comparable to the agreement between a reference optical particle counter and the TEOM-FDMS (20 %). Inter-unit precision for the 14 OPC-N2 sensors of 22 ± 13 % for PM10 mass concentrations was observed. Overall, the OPC-N2 was found to accurately measure ambient airborne particle mass concentration provided they are (i) correctly calibrated and (ii) corrected for ambient RH. The level of precision demonstrated between multiple OPC-N2s suggests that they would be suitable devices for applications where the spatial variability in particle concentration was to be determined.
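A simplified sketch of an RH correction in the spirit of the kappa-Koehler approach described above: the reported (wet) particle mass is divided by a hygroscopic growth factor computed from the ambient water activity. The kappa value and densities below are illustrative assumptions, not the bulk values derived in the study.

import numpy as np

def rh_correct_pm(pm_raw, rh_percent, kappa=0.4, rho_water=1.0, rho_dry=1.65):
    aw = np.clip(np.asarray(rh_percent, dtype=float) / 100.0, 0.0, 0.95)   # water activity
    growth = 1.0 + (rho_water / rho_dry) * kappa * aw / (1.0 - aw)         # wet/dry mass ratio
    return np.asarray(pm_raw, dtype=float) / growth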
Do Pigeons Prefer Information in the Absence of Differential Reinforcement?
Zentall, Thomas R.; Stagner, Jessica P.
2012-01-01
Prior research indicates that pigeons do not prefer an alternative that provides a sample (for matching-to-sample) over an alternative that does not provide a sample (i.e., there is no indication of which comparison stimulus is correct). However, Zentall and Stagner (2010) showed that when delay of reinforcement was controlled, pigeons had a strong preference for matching over pseudo-matching (there was a sample but it did not indicate which comparison stimulus was correct). Experiment 1 of the present study replicated and extended the results of the Zentall and Stagner study by including an identity relation between the sample and one of the comparison stimuli in both the matching and pseudo-matching tasks. In Experiment 2, when we asked if the pigeons would still prefer matching if we equated the two tasks for probability of reinforcement, we found no systematic preference for matching over pseudo-matching. Thus, it appears that in the absence of differential reinforcement, the information provided by a sample that signals which of the two comparison stimuli is correct is insufficient to produce a preference for that alternative. PMID:22367755
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
ERIC Educational Resources Information Center
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
Fast and robust control of two interacting spins
NASA Astrophysics Data System (ADS)
Yu, Xiao-Tong; Zhang, Qi; Ban, Yue; Chen, Xi
2018-06-01
Rapid preparation, manipulation, and correction of spin states with high fidelity are requisite for quantum information processing and quantum computing. In this paper, we propose a fast and robust approach for controlling two spins with Heisenberg and Ising interactions. By using the concept of shortcuts to adiabaticity, we first inverse design the driving magnetic fields for achieving fast spin flip or generating the entangled Bell state, and further optimize them with respect to the error and fluctuation. In particular, the designed shortcut protocols can efficiently suppress the unwanted transition or control error induced by anisotropic antisymmetric Dzyaloshinskii-Moriya exchange. Several examples and comparisons are illustrated, showing the advantages of our methods. Finally, we emphasize that the results can be naturally extended to multiple interacting spins and other quantum systems in an analogous fashion.
NASA Astrophysics Data System (ADS)
Lü, Hua-Ping; Wang, Shi-Hong; Li, Xiao-Wen; Tang, Guo-Ning; Kuang, Jin-Yu; Ye, Wei-Ping; Hu, Gang
2004-06-01
Two-dimensional one-way coupled map lattices are used for cryptography, in which multiple space units produce chaotic outputs in parallel. One of the outputs plays the role of driving for synchronization of the decryption system while the others perform the function of information encoding. With this separation of functions the receiver can establish a self-checking and self-correction mechanism, and enjoys the advantages of both synchronous and self-synchronizing schemes. The present system is compared with the Advanced Encryption Standard (AES) with respect to the influence of channel noise. Numerical investigations show that our system is much more robust than AES against channel noise perturbations and can therefore be better used for secure communications over channels with large noise.
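For readers unfamiliar with coupled map lattices, the toy sketch below iterates a one-dimensional, one-way coupled logistic-map lattice in which the first site runs freely as the driving signal and the remaining sites produce chaotic outputs in parallel; it is a simplified one-dimensional analogue under assumed parameter values, not the two-dimensional scheme of the paper.

```python
import numpy as np

def one_way_cml(n_sites=8, n_steps=1000, eps=0.9, a=3.9, seed=0):
    """Iterate a one-way coupled logistic-map lattice (toy 1-D analogue)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_sites)
    logistic = lambda v: a * v * (1.0 - v)
    history = np.empty((n_steps, n_sites))
    for t in range(n_steps):
        fx = logistic(x)
        x = (1.0 - eps) * fx + eps * np.roll(fx, 1)  # each site driven by its left neighbour
        x[0] = fx[0]                                 # first site evolves freely (driving unit)
        history[t] = x
    return history

outputs = one_way_cml()
print(outputs[-1])  # parallel chaotic outputs of all sites at the final step
```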
Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1984-01-01
A computational model, SCIPVIS, is described which predicts the multiple cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs are handled in a rigorous manner. Calculations are presented that exhibit the fundamental interactive processes occurring in supersonic jets, and the model is assessed via comparisons with detailed laboratory data for a variety of underexpanded and overexpanded jets.
Colom, Roberto; Stein, Jason L.; Rajagopalan, Priya; Martínez, Kenia; Hermel, David; Wang, Yalin; Álvarez-Linera, Juan; Burgaleta, Miguel; Quiroga, MªÁngeles; Shih, Pei Chun; Thompson, Paul M.
2014-01-01
Here we apply a method for automated segmentation of the hippocampus in 3D high-resolution structural brain MRI scans. One hundred and four healthy young adults completed twenty one tasks measuring abstract, verbal, and spatial intelligence, along with working memory, executive control, attention, and processing speed. After permutation tests corrected for multiple comparisons across vertices (p < .05) significant relationships were found for spatial intelligence, spatial working memory, and spatial executive control. Interactions with sex revealed significant relationships with the general factor of intelligence (g), along with abstract and spatial intelligence. These correlations were mainly positive for males but negative for females, which might support the efficiency hypothesis in women. Verbal intelligence, attention, and processing speed were not related to hippocampal structural differences. PMID:25632167
Comparison of FLAASH and QUAC atmospheric correction methods for Resourcesat-2 LISS-IV data
NASA Astrophysics Data System (ADS)
Saini, V.; Tiwari, R. K.; Gupta, R. P.
2016-05-01
The LISS-IV sensor aboard Resourcesat-2 is a modern, relatively high-resolution multispectral sensor with considerable potential for generating good-quality land use/land cover maps. It acquires data at high (10-bit) radiometric resolution and 5.8 m spatial resolution and has three bands in the visible-near-infrared region. This is of particular importance to the global community, as the data are provided at highly competitive prices. However, no literature describing the atmospheric correction of Resourcesat-2 LISS-IV data could be found. Further, without atmospheric correction the full radiometric potential of any remote sensing data remains underutilized. The FLAASH and QUAC modules of the ENVI software are widely used by researchers for atmospheric correction of popular remote sensing data such as Landsat, SPOT, IKONOS, LISS-I, LISS-III, etc. This article outlines a methodology for atmospheric correction of Resourcesat-2 LISS-IV data. Also, reflectance from the two atmospheric correction modules (FLAASH and QUAC) is compared with top-of-atmosphere (TOA) and standard data to determine the most suitable method for reflectance estimation for this sensor.
Murphy, Kathleen R.; Butler, Kenna D.; Spencer, Robert G. M.; Stedmon, Colin A.; Boehme, Jennifer R.; Aiken, George R.
2010-01-01
The fluorescent properties of dissolved organic matter (DOM) are often studied in order to infer DOM characteristics in aquatic environments, including source, quantity, composition, and behavior. Although fluorescence spectroscopy is a potentially powerful technique, a single, widely implemented standard method for correcting and presenting fluorescence measurements is lacking, leading to difficulties when comparing data collected by different research groups. This paper reports on a large-scale interlaboratory comparison in which natural samples and well-characterized fluorophores were analyzed in 20 laboratories in the U.S., Europe, and Australia. Shortcomings were evident in several areas, including data quality assurance, the accuracy of spectral correction factors used to correct excitation-emission matrices (EEMs), and the treatment of optically dense samples. Data corrected by participants according to individual laboratory procedures were more variable than when corrected under a standard protocol. Wavelength dependence in measurement precision and accuracy was observed within and between instruments, even in corrected data. In an effort to reduce future occurrences of similar problems, algorithms for correcting and calibrating EEMs are described in detail, and MATLAB scripts for implementing the study's protocol are provided. Combined with the recent expansion of spectral fluorescence standards, this approach will serve to increase the intercomparability of DOM fluorescence studies.
Joshi, Nabin R; Ly, Emma; Viswanathan, Suresh
2017-08-01
To assess the effect of age on, and the test-retest reliability of, the intensity response function of the full-field photopic negative response (PhNR) in normal healthy human subjects. Full-field electroretinograms (ERGs) were recorded from one eye of 45 subjects, and 39 of these subjects were tested on two separate days with a Diagnosys Espion System (Lowell, MA, USA). The visual stimuli consisted of brief (<5 ms) red flashes ranging from 0.00625 to 6.4 phot cd.s/m², delivered on a constant 7 cd/m² blue background. PhNR amplitudes were measured at the trough, both from baseline (BT) and from the preceding b-wave peak (PT), and b-wave amplitude was measured at its peak from the preceding a-wave trough, or from baseline if the a-wave was not present. The intensity response data of all three ERG measures were fitted with a generalized Naka-Rushton function to derive the saturated amplitude (Vmax), semisaturation constant (K) and slope (n) parameters. The effect of age on the fit parameters was assessed with linear regression, and test-retest reliability was assessed with the Wilcoxon signed-rank test and Bland-Altman analysis. Holm's correction was applied to account for multiple comparisons. Vmax of BT was significantly smaller than that of PT and the b-wave, whereas the Vmax of PT and the b-wave did not differ significantly from each other. The slope parameter n was smallest for BT and largest for the b-wave, and the differences between the slopes of all three measures were statistically significant. The small differences observed in the mean values of K for the different measures did not reach statistical significance. The Wilcoxon signed-rank test indicated no significant differences between the two test visits for any of the Naka-Rushton parameters for the three ERG measures, and the Bland-Altman plots indicated that the mean difference between test and retest measurements of the different fit parameters was close to zero and within 6% of the average of the test and retest values of the respective parameters for all three ERG measurements, indicating minimal bias. While the coefficient of reliability (COR, defined as 1.96 times the standard deviation of the test-retest difference) of each fit parameter was more or less comparable across the three ERG measurements, the %COR (COR normalized to the mean of the test and retest measures) was generally larger for BT than for PT and the b-wave for each fit parameter. The Naka-Rushton fit parameters did not show statistically significant changes with age for any of the ERG measures when corrections were applied for multiple comparisons. However, the Vmax of BT demonstrated a weak correlation with age prior to correction for multiple comparisons, and the effect of age on this parameter showed greater significance when the measure was expressed as a ratio of the Vmax of the b-wave from the same subject. The Vmax of the BT amplitude measure of the PhNR was at best weakly correlated with age. None of the other parameters of the Naka-Rushton fit to the intensity response data of either the PhNR or the b-wave showed any systematic changes with age. The test-retest reliability of the fit parameters for PhNR BT amplitude measurements appears to be lower than that of the PhNR PT and b-wave amplitude measurements.
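The standard Naka-Rushton function referred to above has the form V(I) = Vmax * I^n / (I^n + K^n). The sketch below fits that standard form to invented intensity-response values with SciPy; the study used a generalized variant and real ERG data, so the numbers here are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(i, v_max, k, n):
    """Standard Naka-Rushton intensity-response function."""
    return v_max * i**n / (i**n + k**n)

# Hypothetical flash strengths (phot cd.s/m2) and response amplitudes (uV)
intensity = np.array([0.00625, 0.025, 0.1, 0.4, 1.6, 6.4])
amplitude = np.array([3.0, 8.0, 15.0, 22.0, 26.0, 27.0])

(v_max, k, n), _ = curve_fit(naka_rushton, intensity, amplitude,
                             p0=[amplitude.max(), np.median(intensity), 1.0],
                             maxfev=10000)
print(f"Vmax={v_max:.1f} uV, K={k:.3f}, n={n:.2f}")
```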
Holzgrefe, Henry; Ferber, Georg; Champeroux, Pascal; Gill, Michael; Honda, Masaki; Greiter-Wilke, Andrea; Baird, Theodore; Meyer, Olivier; Saulnier, Muriel
2014-01-01
In vivo models have been required to demonstrate relative cardiac safety, but model sensitivity has not been systematically investigated. Cross-species and human translation of repolarization delay, assessed as QT/QTc prolongation, has not been compared employing common methodologies across multiple species and sites. Therefore, the accurate translation of repolarization results within and between preclinical species, and to man, remains problematic. Six pharmaceutical companies entered into an informal consortium designed to collect high-resolution telemetered data in multiple species (dog; n=34, cynomolgus; n=37, minipig; n=12, marmoset; n=14, guinea pig; n=5, and man; n=57). All animals received vehicle and varying doses of moxifloxacin (3-100 mg/kg, p.o.) with telemetered ECGs (≥500 Hz) obtained for 20-24h post-dose. Individual probabilistic QT-RR relationships were derived for each subject. The rate-correction efficacies of the individual (QTca) and generic correction formulae (Bazett, Fridericia, and Van de Water) were objectively assessed as the mean squared slopes of the QTc-RR relationships. Normalized moxifloxacin QTca responses (Veh Δ%/μM) were derived for 1h centered on the moxifloxacin Tmax. All QT-RR ranges demonstrated probabilistic uncertainty; slopes varied distinctly by species where dog and human exhibited the lowest QT rate-dependence, which was much steeper in the cynomolgus and guinea pig. Incorporating probabilistic uncertainty, the normalized QTca-moxifloxacin responses were similarly conserved across all species, including man. The current results provide the first unambiguous evidence that all preclinical in vivo repolarization assays, when accurately modeled and evaluated, yield results that are consistent with the conservation of moxifloxacin-induced QT prolongation across all common preclinical species. Furthermore, these outcomes are directly transferable across all species including man. The consortium results indicate that the implementation of standardized QTc data presentation, QTc reference cycle lengths, and rate-correction coefficients can markedly improve the concordance of preclinical and clinical outcomes in most preclinical species. Copyright © 2013 Elsevier Inc. All rights reserved.
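For reference, the generic rate-correction formulae named above are commonly written as below; Bazett and Fridericia divide by a power of the RR interval in seconds, while Van de Water is the canine linear correction. The 0.087 coefficient quoted for Van de Water is the commonly cited value and should be checked against the original source; the subject-specific probabilistic QTca correction used in the study is not reproduced here.

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / sqrt(RR), with QT in ms and RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR**(1/3), with QT in ms and RR in seconds."""
    return qt_ms / rr_s ** (1.0 / 3.0)

def qtc_van_de_water(qt_ms, rr_ms):
    """Van de Water (canine): QTc = QT - 0.087 * (RR - 1000), all in ms.
    Coefficient taken from the commonly cited formula; verify before use."""
    return qt_ms - 0.087 * (rr_ms - 1000.0)

# Example: QT = 250 ms at a cycle length of 500 ms (heart rate 120 bpm)
print(qtc_bazett(250.0, 0.5), qtc_fridericia(250.0, 0.5), qtc_van_de_water(250.0, 500.0))
```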
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Validation of UARS Microwave Limb Sounder ClO Measurements
NASA Technical Reports Server (NTRS)
Waters, J. W.; Read, W. G.; Froidevaux, L.; Lungu, T. A.; Perun, V. S.; Stachnik, R. A.; Jarnot, R. F.; Cofield, R. E.; Fishbein, E. F.; Flower, D. A.;
1996-01-01
Validation of stratospheric ClO measurements by the Microwave Limb Sounder (MLS) on the Upper Atmosphere Research Satellite (UARS) is described. Credibility of the measurements is established by (1) the consistency of the measured ClO spectral emission line with the retrieved ClO profiles and (2) comparisons of ClO from MLS with that from correlative measurements by balloon-based, ground-based, and aircraft-based instruments. Values of "noise" (random), "scaling" (multiplicative), and "bias" (additive) uncertainties are determined for the Version 3 data, the first version publicly released, and the known artifacts in these data are identified. Comparisons with correlative measurements indicate agreement to within the combined uncertainties expected for MLS and the other measurements being compared. It is concluded that MLS Version 3 ClO data, with proper consideration of the uncertainties and "quality" parameters produced with these data, can be used for scientific analyses at retrieval surfaces between 46 and 1 hPa (approximately 20-50 km in height). Future work is planned to correct known problems in the data and improve their quality.
Potential Subjective Effectiveness of Active Interior Noise Control in Propeller Airplanes
NASA Technical Reports Server (NTRS)
Powell, Clemans A.; Sullivan, Brenda M.
2000-01-01
Active noise control technology offers the potential for weight-efficient aircraft interior noise reduction, particularly for propeller aircraft. However, there is little information on how passengers respond to this type of interior noise control. This paper presents results of two experiments that use sound quality engineering practices to determine the subjective effectiveness of hypothetical active noise control (ANC) systems in a range of propeller aircraft. The two experiments differed by the type of judgments made by the subjects: pair comparisons based on preference in the first and numerical category scaling of noisiness in the second. Although the results of the two experiments were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference. The reductions in subjective response due to the ANC conditions were predicted with reasonable accuracy by reductions in measured loudness level. Inclusion of corrections for the sound quality characteristics of tonality and fluctuation strength in multiple regression models improved the prediction of the ANC effects.
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
ERIC Educational Resources Information Center
Noell, George H.; Gresham, Frank M.
2001-01-01
Describes design logic and potential uses of a variant of the multiple-baseline design. The multiple-baseline multiple-sequence (MBL-MS) consists of multiple-baseline designs that are interlaced with one another and include all possible sequences of treatments. The MBL-MS design appears to be primarily useful for comparison of treatments taking…
ERIC Educational Resources Information Center
Sua, Dangbe Wuo
A study compared correctional adult educators and formal adult educators in terms of their expressed beliefs in the collaborative teaching mode as measured by the Principles of Adult Learning Scale. The sample consisted of 8 correctional adult educators from the Lake Correctional Institution and 10 adult education teachers from the Manatee Area…
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
The ‘Pokemon’ (ZBTB7) Gene: No Evidence of Association with Sporadic Breast Cancer
Salas, Antonio; Vega, Ana; Milne, Roger L.; García-Magariños, Manuel; Ruibal, Álvaro; Benítez, Javier; Carracedo, Ángel
2008-01-01
It has been proposed that the excess of familial risk associated with breast cancer could be explained by the cumulative effect of multiple weakly predisposing alleles. The transcriptional repressor FBI1, also known as Pokemon, has recently been identified as a critical factor in oncogenesis. This protein is encoded by the ZBTB7 gene. Here we aimed to determine whether polymorphisms in ZBTB7 are associated with breast cancer risk in a sample of cases and controls recruited in hospitals in northern and central Spain. We genotyped 15 SNPs in ZBTB7, including the flanking regions, with an average coverage of 1 SNP/2.4 Kb, in 360 sporadic breast cancer cases and 402 controls. Comparison of allele, genotype and haplotype frequencies between cases and controls did not reveal associations using Pearson's chi-square test and a permutation procedure to correct for multiple testing. In this, the first study of the ZBTB7 gene in relation to sporadic breast cancer, we found no evidence of an association. PMID:21892298
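A max-statistic permutation procedure of the general kind mentioned above can be sketched as follows; the data layout, the per-SNP Pearson chi-square statistic, and the number of permutations are assumptions for illustration rather than the study's exact implementation.

```python
import numpy as np
from scipy.stats import chi2_contingency

def max_stat_permutation_p(genotypes, labels, n_perm=1000, seed=0):
    """Family-wise corrected p-values via max-statistic permutation.

    genotypes: (n_subjects, n_snps) array of genotype categories (e.g. 0/1/2).
    labels:    (n_subjects,) array of case-control status (0/1).
    """
    genotypes = np.asarray(genotypes)
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)

    def chi2_per_snp(lab):
        stats = []
        for j in range(genotypes.shape[1]):
            values = np.unique(genotypes[:, j])
            table = np.array([[np.sum((genotypes[:, j] == g) & (lab == c))
                               for g in values] for c in (0, 1)])
            stats.append(chi2_contingency(table)[0])
        return np.array(stats)

    observed = chi2_per_snp(labels)
    max_null = np.array([chi2_per_snp(rng.permutation(labels)).max()
                         for _ in range(n_perm)])
    # Corrected p-value: fraction of permutations whose maximum statistic
    # equals or exceeds each SNP's observed statistic
    return (max_null[:, None] >= observed[None, :]).mean(axis=0)
```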
Rangel, Rafael Henrique; Möller, Leona; Sitter, Helmut; Stibane, Tina; Strzelczyk, Adam
2017-11-01
Multiple-choice questions (MCQs) provide useful information about correct and incorrect answers, but they do not offer information about students' confidence. Ninety medical students took part in one curricular neurology multiple-choice exam and another 81 in a second, and all indicated their confidence for every single MCQ. Each MCQ had a defined level of potential clinical impact on patient safety (uncritical, risky, harmful). Our first objective was to detect informed (IF), guessed (GU), misinformed (MI), and uninformed (UI) answers. Further, we evaluated whether there were significant differences in confidence between correct and incorrect answers. Then, we explored whether clinical impact had a significant influence on students' confidence. There were 1818 IF, 635 GU, 71 MI, and 176 UI answers in exam I and 1453 IF, 613 GU, 92 MI, and 191 UI answers in exam II. Students' confidence was significantly higher for correct than for incorrect answers in both exams (p < 0.001). For exam I, students' confidence was significantly higher for incorrect harmful than for incorrect risky classified MCQs (p = 0.01). For exam II, students' confidence was significantly higher for incorrect harmful than for incorrect benign MCQs (p < 0.01) and significantly higher for correct benign than for correct harmful MCQs (p = 0.01). We were pleased to see that there were more informed than guessed answers, more uninformed than misinformed answers, and higher confidence for correct than for incorrect answers. Our expectation that students would state higher confidence in correct harmful MCQs and lower confidence in incorrect harmful MCQs could not be confirmed.
Liu, Xiaozheng; Yuan, Zhenming; Guo, Zhongwei; Xu, Dongrong
2015-05-01
Diffusion tensor imaging is widely used for studying neural fiber trajectories in white matter and for quantifying changes in tissue using diffusion properties at each voxel in the brain. To better model the nature of crossing fibers within complex architectures, rather than using a simplified tensor model that assumes only a single fiber direction at each image voxel, a model mixing multiple diffusion tensors is used to profile diffusion signals from high angular resolution diffusion imaging (HARDI) data. Based on the HARDI signal and a multiple-tensor model, spherical deconvolution methods have been developed to overcome the limitations of the diffusion tensor model when resolving crossing fibers. The Richardson-Lucy algorithm is a popular spherical deconvolution method used in previous work. However, it is based on a Gaussian noise assumption, while HARDI data are always very noisy and follow a Rician distribution. The current work aims to present a novel solution to address these issues. By simultaneously considering both the Rician bias and the neighbor correlation in HARDI data, the authors propose a localized Richardson-Lucy (LRL) algorithm to estimate fiber orientations for HARDI data. The proposed method can simultaneously reduce noise and correct the Rician bias. The mean angular error (MAE) between the estimated fiber orientation distribution (FOD) field and the reference FOD field was computed to examine whether the proposed LRL algorithm offered any advantage over the conventional RL algorithm at various levels of noise. The normalized mean squared error (NMSE) was also computed to measure the similarity between the true FOD field and the estimated FOD field. For the MAE comparisons, the proposed LRL approach obtained the best results in most cases across different levels of SNR and b-values. For the NMSE comparisons, the proposed LRL approach obtained the best results in most cases at b-value = 3000 s/mm², which is the recommended scheme for HARDI data acquisition. In addition, the FOD fields estimated by the proposed LRL approach in fiber-crossing regions of real data sets showed fiber structures that agreed with common knowledge about these regions. The novel spherical deconvolution method for improved accuracy in investigating crossing fibers can simultaneously reduce noise and correct the Rician bias. With the noise smoothed and the bias corrected, this algorithm is especially suitable for estimating fiber orientations in HARDI data. Experimental results using both synthetic and real imaging data demonstrated the success and effectiveness of the proposed LRL algorithm.
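For context, the conventional Richardson-Lucy iteration that the LRL method extends can be written as below; this sketch keeps only the baseline multiplicative update, omitting the Rician-bias correction and the neighbourhood (localization) term that are the authors' contribution, and assumes the forward response matrix H is supplied.

```python
import numpy as np

def richardson_lucy(observed, H, n_iter=50, eps=1e-12):
    """Baseline Richardson-Lucy deconvolution.

    observed: measured non-negative signal vector (e.g. HARDI samples over gradient directions).
    H:        non-negative forward matrix mapping a discretized FOD to the signal.
    Returns a non-negative FOD estimate.
    """
    f = np.full(H.shape[1], max(observed.mean(), eps))  # flat, positive start
    column_sums = H.sum(axis=0)
    for _ in range(n_iter):
        prediction = H @ f
        ratio = observed / np.maximum(prediction, eps)
        f = f * (H.T @ ratio) / np.maximum(column_sums, eps)
    return f
```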
NASA Astrophysics Data System (ADS)
Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela
1995-03-01
We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-23
... treatment of multiple step plans for the acquisition of stock and CERTs involving members of a consolidated... language ``Service, 1111 Constitution Avenue NW.,'' is corrected to read ``Service, 1111 Constitution... from the bottom of the page, the language ``return group; (4) application of these'' is corrected to...
ERIC Educational Resources Information Center
Pfaffel, Andreas; Spiel, Christiane
2016-01-01
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu’s multiple comparisons with the best (MCB) – adapted from the Dunnett’s multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
Comparisons of Aquarius Measurements over Oceans with Radiative Transfer Models at L-Band
NASA Technical Reports Server (NTRS)
Dinnat, E.; LeVine, D.; Abraham, S.; DeMattheis, P.; Utku, C.
2012-01-01
The Aquarius/SAC-D spacecraft includes three L-band (1.4 GHz) radiometers dedicated to measuring sea surface salinity. It was launched in June 2011 by NASA and CONAE (Argentine space agency). We report detailed comparisons of Aquarius measurements with radiative transfer model predictions. These comparisons are used as part of the initial assessment of Aquarius data and to estimate the radiometer calibration bias and stability. Comparisons are also being performed to assess the performance of models used in the retrieval algorithm for correcting the effect of various sources of geophysical "noise" (e.g. Faraday rotation, surface roughness). Such corrections are critical in bringing the error in retrieved salinity down to the required 0.2 practical salinity unit on monthly global maps at 150 km by 150 km resolution.
Correcting for Indirect Range Restriction in Meta-Analysis: Testing a New Meta-Analytic Procedure
ERIC Educational Resources Information Center
Le, Huy; Schmidt, Frank L.
2006-01-01
Using computer simulation, the authors assessed the accuracy of J. E. Hunter, F. L. Schmidt, and H. Le's (2006) procedure for correcting for indirect range restriction, the most common type of range restriction, in comparison with the conventional practice of applying the Thorndike Case II correction for direct range restriction. Hunter et…
Todd A. Schroeder; Warren B. Cohen; Conghe Song; Morton J. Canty; Zhiqiang Yang
2006-01-01
Detecting and characterizing continuous changes in early forest succession using multi-temporal satellite imagery requires atmospheric correction procedures that are both operationally reliable, and that result in comparable units (e-g., surface reflectance). This paper presents a comparison of five atmospheric correction methods (2 relative, 3 absolute) used to...
Takeuchi, Akihiko; Yamamoto, Norio; Shirai, Toshiharu; Nishida, Hideji; Hayashi, Katsuhiro; Watanabe, Koji; Miwa, Shinji; Tsuchiya, Hiroyuki
2015-12-07
In a previous report, we described a method of reconstruction using tumor-bearing autograft treated by liquid nitrogen for malignant bone tumor. Here we present the first case of bone deformity correction following a tumor-bearing frozen autograft via three-dimensional computerized reconstruction after multiple surgeries. A 16-year-old female student presented with pain in the left lower leg and was diagnosed with a low-grade central tibial osteosarcoma. Surgical bone reconstruction was performed using a tumor-bearing frozen autograft. Bone union was achieved at 7 months after the first surgical procedure. However, local tumor recurrence and lung metastases occurred 2 years later, at which time a second surgical procedure was performed. Five years later, the patient developed a 19° varus deformity and underwent a third surgical procedure, during which an osteotomy was performed using the Taylor Spatial Frame three-dimensional external fixation technique. A fourth corrective surgical procedure was performed in which internal fixation was achieved with a locking plate. Two years later, and 10 years after the initial diagnosis of tibial osteosarcoma, the bone deformity was completely corrected, and the patient's limb function was good. We present the first report in which a bone deformity due to a primary osteosarcoma was corrected using a tumor-bearing frozen autograft, followed by multiple corrective surgical procedures that included osteotomy, three-dimensional external fixation, and internal fixation.
DNA repair variants and breast cancer risk.
Grundy, Anne; Richardson, Harriet; Schuetz, Johanna M; Burstyn, Igor; Spinelli, John J; Brooks-Wilson, Angela; Aronson, Kristan J
2016-05-01
A functional DNA repair system has been identified as important in the prevention of tumour development. Previous studies have hypothesized that common polymorphisms in DNA repair genes could play a role in breast cancer risk and also identified the potential for interactions between these polymorphisms and established breast cancer risk factors such as physical activity. Associations with breast cancer risk for 99 single nucleotide polymorphisms (SNPs) from genes in ten DNA repair pathways were examined in a case-control study including both Europeans (644 cases, 809 controls) and East Asians (299 cases, 160 controls). Odds ratios in both additive and dominant genetic models were calculated separately for participants of European and East Asian ancestry using multivariate logistic regression. The impact of multiple comparisons was assessed by correcting for the false discovery rate within each DNA repair pathway. Interactions between several breast cancer risk factors and DNA repair SNPs were also evaluated. One SNP (rs3213282) in the gene XRCC1 was associated with an increased risk of breast cancer in the dominant model of inheritance following adjustment for the false discovery rate (P < 0.05), although no associations were observed for other DNA repair SNPs. Interactions of six SNPs in multiple DNA repair pathways with physical activity were evident prior to correction for FDR, following which there was support for only one of the interaction terms (P < 0.05). No consistent associations between variants in DNA repair genes and breast cancer risk or their modification by breast cancer risk factors were observed. © 2016 Wiley Periodicals, Inc.
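The pathway-wise false discovery rate control described above can be sketched as follows using the Benjamini-Hochberg procedure from statsmodels; the grouping of SNPs into pathways and the choice of the BH estimator are assumptions here, since the exact FDR procedure is not spelled out in the abstract.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def fdr_within_pathways(p_values, pathways, alpha=0.05):
    """Apply Benjamini-Hochberg FDR correction separately within each pathway.

    p_values: per-SNP p-values.
    pathways: pathway label for each SNP.
    Returns a boolean array marking SNPs significant after correction.
    """
    p_values = np.asarray(p_values, dtype=float)
    pathways = np.asarray(pathways)
    significant = np.zeros(p_values.shape, dtype=bool)
    for pw in np.unique(pathways):
        mask = pathways == pw
        reject, _, _, _ = multipletests(p_values[mask], alpha=alpha, method="fdr_bh")
        significant[mask] = reject
    return significant

# Toy example: three SNPs in one (hypothetical) pathway, two in another
print(fdr_within_pathways([0.001, 0.03, 0.2, 0.04, 0.5],
                          ["NER", "NER", "NER", "BER", "BER"]))
```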
Solomon, Gary S; Kuhn, Andrew W; Zuckerman, Scott L; Casson, Ira R; Viano, David C; Lovell, Mark R; Sills, Allen K
2016-05-01
A recent study found that an earlier age of first exposure (AFE) to tackle football was associated with long-term neurocognitive impairment in retired National Football League (NFL) players. To assess the association between years of exposure to pre-high school football (PreYOE) and neuroradiological, neurological, and neuropsychological outcome measures in a different sample of retired NFL players. Cross-sectional study; Level of evidence, 3. Forty-five former NFL players were included in this study. All participants prospectively completed extensive history taking, a neurological examination, brain magnetic resonance imaging, and a comprehensive battery of neuropsychological tests. To measure the associations between PreYOE and these outcome measures, multiple regression models were utilized while controlling for several covariates. After applying a Bonferroni correction for multiple comparisons, none of the neurological, neuroradiological, or neuropsychological outcome measures yielded a significant relationship with PreYOE. A second Bonferroni-corrected analysis of a subset of these athletes with self-reported learning disability yielded no significant relationships on paper-and-pencil neurocognitive tests but did result in a significant association between learning disability and computerized indices of visual motor speed and reaction time. The current study failed to replicate the results of a prior study, which concluded that an earlier AFE to tackle football might result in long-term neurocognitive deficits. In 45 retired NFL athletes, there were no associations between PreYOE and neuroradiological, neurological, and neuropsychological outcome measures. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Gerzen, T.; Minkwitz, D.
2016-01-01
The accuracy and availability of satellite-based applications like GNSS positioning and remote sensing crucially depend on the knowledge of the ionospheric electron density distribution. The tomography of the ionosphere is one of the major tools to provide link-specific ionospheric corrections as well as to study and monitor physical processes in the ionosphere. In this paper, we introduce a simultaneous multiplicative column-normalized method (SMART) for electron density reconstruction. Further, SMART+ is developed by combining SMART with a successive correction method. In this way, a balance between the measurements of intersected and non-intersected voxels is achieved. The methods are compared with the well-known algebraic reconstruction techniques ART and SART. All four methods are applied to reconstruct the 3-D electron density distribution by ingestion of ground-based GNSS TEC data into the NeQuick model. The comparative case study is implemented over Europe during two periods of the year 2011 covering quiet to disturbed ionospheric conditions. In particular, the performance of the methods is compared in terms of the convergence behaviour and the capability to reproduce sTEC and electron density profiles. For this purpose, independent sTEC data of four IGS stations and electron density profiles of four ionosonde stations are taken as reference. The results indicate that SMART significantly reduces the number of iterations necessary to achieve a predefined accuracy level. Further, SMART+ decreases the median of the absolute sTEC error by up to 15, 22, 46 and 67 % compared to SMART, SART, ART and NeQuick, respectively.
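As background for this comparison, the classical ART (Kaczmarz) update used as one of the baselines can be sketched as follows; the relaxation parameter, non-negativity projection, and matrix layout are illustrative assumptions, and the multiplicative SMART/SMART+ updates themselves are not reproduced here.

```python
import numpy as np

def art_kaczmarz(A, b, n_sweeps=10, relax=0.2, x0=None):
    """Classical ART (Kaczmarz) iteration for voxel-based ionospheric tomography.

    A: (n_rays, n_voxels) matrix of ray-voxel intersection lengths.
    b: (n_rays,) measured slant TEC values.
    Returns a per-voxel electron-density estimate.
    """
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float).copy()
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue                      # this ray intersects no voxel
            residual = b[i] - A[i] @ x
            x = x + relax * residual / row_norms[i] * A[i]
        x = np.maximum(x, 0.0)                # electron density cannot be negative
    return x
```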
Grey matter volume loss is associated with specific clinical motor signs in Huntington's disease.
Coppen, Emma M; Jacobs, Milou; van den Berg-Huysmans, Annette A; van der Grond, Jeroen; Roos, Raymund A C
2018-01-01
Motor disturbances are clinical hallmarks of Huntington's disease (HD) and involve chorea, dystonia, hypokinesia and visuomotor dysfunction. Investigating the association between specific motor signs and different regional volumes is important to understand the heterogeneity of HD. To investigate the motor phenotype of HD and associations with subcortical and cortical grey matter volume loss. Structural T1-weighted MRI scans of 79 HD patients and 30 healthy controls were used to calculate volumes of seven subcortical structures including the nucleus accumbens, hippocampus, thalamus, caudate nucleus, putamen, pallidum and amygdala. Multiple linear regression analyses, corrected for age, gender, CAG, MRI scan protocol and normalized brain volume, were performed to assess the relationship between subcortical volumes and different motor subdomains (i.e. eye movements, chorea, dystonia, hypokinesia/rigidity and gait/balance). Voxel-based morphometry analysis was used to investigate the relationship between cortical volume changes and motor signs. Subcortical volume loss of the accumbens nucleus, caudate nucleus, putamen, and pallidum were associated with higher chorea scores. No other subcortical region was significantly associated with motor symptoms after correction for multiple comparisons. Voxel-based cortical grey matter volume reductions in occipital regions were related with an increase in eye movement scores. In HD, chorea is mainly associated with subcortical volume loss, while eye movements are more related to cortical volume loss. Both subcortical and cortical degeneration has an impact on motor impairment in HD. This implies that there is a widespread contribution of different brain regions resulting in the clinical motor presentation seen in HD patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kitzis, J. L.; Kitzis, S. N.
1979-01-01
The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos β estimates; and (3) a comparison of the world brightness temperature (T_B) map with actual SMMR measurements.
Multiple testing and power calculations in genetic association studies.
So, Hon-Cheong; Sham, Pak C
2011-01-01
Modern genetic association studies typically involve multiple single-nucleotide polymorphisms (SNPs) and/or multiple genes. With the development of high-throughput genotyping technologies and the reduction in genotyping cost, investigators can now assay up to a million SNPs for direct or indirect association with disease phenotypes. In addition, some studies involve multiple disease or related phenotypes and use multiple methods of statistical analysis. The combination of multiple genetic loci, multiple phenotypes, and multiple methods of evaluating associations between genotype and phenotype means that modern genetic studies often involve the testing of an enormous number of hypotheses. When multiple hypothesis tests are performed in a study, there is a risk of inflation of the type I error rate (i.e., the chance of falsely claiming an association when there is none). Several methods for multiple-testing correction are in popular use, and they all have strengths and weaknesses. Because no single method is universally adopted or always appropriate, it is important to understand the principles, strengths, and weaknesses of the methods so that they can be applied appropriately in practice. In this article, we review the three principal methods for multiple-testing correction and provide guidance for calculating statistical power.
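As a concrete illustration of how a multiple-testing correction feeds into a power calculation, the sketch below computes the approximate power of a two-sided one-sample z-test at a Bonferroni-corrected significance level; it is a textbook approximation and not a procedure taken from the article.

```python
from scipy.stats import norm

def power_bonferroni(effect_size, n, m_tests, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test with Bonferroni correction.

    effect_size: true mean shift in standard-deviation units.
    n:           sample size.
    m_tests:     number of tests sharing the family-wise alpha.
    """
    alpha_per_test = alpha / m_tests
    z_crit = norm.ppf(1.0 - alpha_per_test / 2.0)
    ncp = effect_size * n ** 0.5          # non-centrality of the test statistic
    # P(|Z + ncp| > z_crit); the second term is usually negligible
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Example: effect of 0.3 SD, n = 200, correcting for 1,000,000 SNPs
print(power_bonferroni(0.3, 200, 1_000_000))
```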
Reducing representativeness and sampling errors in radio occultation-radiosonde comparisons
NASA Astrophysics Data System (ADS)
Gilpin, Shay; Rieckh, Therese; Anthes, Richard
2018-05-01
Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary and vertical representativeness errors may also affect the comparison. In RO-RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e. within 3-6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO-RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles and applying a spatial-temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle (~666 km radius), small circle (~300 km radius), and ellipse parallel to the wind direction (~666 km semi-major axis, ~133 km semi-minor axis). We also apply a spatial-temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle and produces differences that are comparable to or less than the RMS differences within circles of similar area. Applying the sampling correction shows the most significant reduction in RMS differences, such that RMS differences are nearly identical to the sampling correction regardless of the geometric constraints. We conclude that implementing the spatial-temporal sampling correction using a reliable model will most effectively reduce sampling errors during RO-RS comparisons; however, if a reliable model is not available, restricting spatial comparisons to within an ellipse parallel to the wind flow will reduce sampling errors caused by horizontal atmospheric variability.
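A minimal sketch of the wind-aligned ellipse test is given below; the flat-Earth displacement, the wind-direction convention, and the axis lengths (taken from the values quoted above) are simplifying assumptions rather than the exact implementation of the study.

```python
import numpy as np

def within_wind_ellipse(dx_east_km, dy_north_km, wind_dir_deg,
                        semi_major_km=666.0, semi_minor_km=133.0):
    """Test whether an RO-RS horizontal displacement lies inside an ellipse
    whose major axis is parallel to the wind flow.

    dx_east_km, dy_north_km: RO minus RS displacement (flat-Earth approximation).
    wind_dir_deg: direction the wind blows toward, in degrees counter-clockwise
                  from east (a convention assumed here).
    """
    theta = np.deg2rad(wind_dir_deg)
    # Rotate the displacement into the wind-aligned frame
    d_along = dx_east_km * np.cos(theta) + dy_north_km * np.sin(theta)
    d_cross = -dx_east_km * np.sin(theta) + dy_north_km * np.cos(theta)
    return (d_along / semi_major_km) ** 2 + (d_cross / semi_minor_km) ** 2 <= 1.0

# Example: a displacement almost parallel to an eastward wind
print(within_wind_ellipse(395.0, 60.0, wind_dir_deg=0.0))
```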
CANDELS Visual Classifications: Scheme, Data Release, and First Results
NASA Technical Reports Server (NTRS)
Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Henry; Koo, David; Bassett, Robert;
2014-01-01
We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H <24.5 involving the dedicated efforts of 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed - GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sersic index. We find that the level of agreement among classifiers is quite good and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement and irregulars the lowest. A comparison of our classifications with the Sersic index and restframe colors shows a clear separation between disk and spheroid populations. Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands. These galaxies typically have very clumpy and extended morphology or are very faint in the V-band.
Injuries in martial arts: a comparison of five styles.
Zetaruk, M N; Violán, M A; Zurakowski, D; Micheli, L J
2005-01-01
To compare five martial arts with respect to injury outcomes. A one year retrospective cohort was studied using an injury survey. Data on 263 martial arts participants (Shotokan karate, n = 114; aikido, n = 47; tae kwon do, n = 49; kung fu, n = 39; tai chi, n = 14) were analysed. Predictor variables included age, sex, training frequency (
Final report of the APMP water flow key comparison: APMP.M.FF-K1
NASA Astrophysics Data System (ADS)
Lee, Kwang-Bock; Chun, Sejong; Terao, Yoshiya; Thai, Nguyen Hong; Tsair Yang, Cheng; Tao, Meng; Gutkin, Mikhail B.
2011-01-01
The key comparison, APMP.M.FF-K1, was undertaken by APMP/TCFF, the Technical Committee for Fluid Flow (TCFF) under the Asia Pacific Metrology Program (APMP). One objective of the key comparison was to demonstrate the degree of equivalence among six participating laboratories (KRISS, NMIJ, VMI, CMS, NIM and VNIIM) in water flow rate metrology by comparing the results with the key comparison reference value (KCRV) determined from the CCM.FF-K1 key comparison. The other objective of this key comparison was to provide supporting evidence for the calibration and measurement capabilities (CMCs), which had been declared by the participating laboratories during this key comparison. The Transfer Standard Package (TSP) was a Coriolis mass flowmeter, which had been used in the CCM.FF-K1 key comparison. Because the K-factors in the APMP.M.FF-K1 key comparison were slightly lower than those of the CCM.FF-K1 key comparison due to long-term drifts of the TSP, a correction value D was introduced. The value of D was given by a weighted sum between two link laboratories (NMIJ and KRISS), which participated in both the CCM.FF-K1 and the APMP.M.FF-K1 key comparisons. After this correction, the K-factors lay between 12.004 and 12.017 at both the low (Re = 254 000) and the high (Re = 561 000) flow rates. Most of the calibration data were within expected uncertainty bounds. However, some data showed undulations, which gave large fluctuations of the metering factor at Re = 561 000. Calculation of degrees of equivalence showed that all the participating laboratories had deviations between -0.009 and 0.007 pulses/kg from the CCM.FF-K1 KCRV at either the low or the high flow rates. In the En calculations, all the participating laboratories showed values less than 1, indicating that the corrected K-factors of all the laboratories were equivalent with the KCRV at both Re = 254 000 and Re = 561 000. When the corrected K-factors of pairs of participating laboratories were compared, all the resulting En values were also less than 1, indicating pairwise equivalence. The full text of this report appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
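The En statistic used above to judge equivalence can be sketched as below; this is the standard definition with expanded (k = 2) uncertainties and ignores any correlation between a laboratory's result and the reference value, which a full key-comparison analysis would take into account.

```python
def en_number(x_lab, u_lab_expanded, x_ref, u_ref_expanded):
    """Degree-of-equivalence statistic: En = |x_lab - x_ref| / sqrt(U_lab^2 + U_ref^2).

    Arguments are the laboratory and reference values with their expanded
    (k = 2) uncertainties; |En| < 1 is taken to indicate equivalence.
    """
    return abs(x_lab - x_ref) / (u_lab_expanded ** 2 + u_ref_expanded ** 2) ** 0.5

# Example with made-up K-factors (pulses/kg) and expanded uncertainties
print(en_number(12.010, 0.006, 12.007, 0.005))  # ~0.38, i.e. equivalent
```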
Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin
2018-04-01
The ZY-3 satellite, launched in 2012, is China's first civilian high-resolution stereo mapping satellite. This paper analyzes the positioning errors of ZY-3 satellite imagery and compensates for them to improve geopositioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results reveal that there are systematic errors in the ZY-3 attitude observations and that the positioning accuracy can be improved after attitude correction with the aid of ground control points. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced more consistent improvement than the linear correction model when only limited ground control points were available for a single scene.
Is multiple-sequence alignment required for accurate inference of phylogeny?
Höhl, Michael; Ragan, Mark A
2007-04-01
The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
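As a generic example of the word-based (k-mer) distances discussed above, the sketch below computes a Euclidean distance between normalized k-mer frequency profiles of two sequences; it is not the specific pattern-based method evaluated in the study, and the word length is an arbitrary choice.

```python
from collections import Counter

def kmer_distance(seq_a, seq_b, k=5):
    """Euclidean distance between normalized k-mer frequency profiles."""
    def profile(seq):
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        return {word: c / total for word, c in counts.items()}

    pa, pb = profile(seq_a), profile(seq_b)
    words = set(pa) | set(pb)
    return sum((pa.get(w, 0.0) - pb.get(w, 0.0)) ** 2 for w in words) ** 0.5

# Example with two short, closely related sequences
print(kmer_distance("ACGTACGTGGATC", "ACGTTCGTGGATC", k=3))
```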
The Rise and Fall of Boot Camps: A Case Study in Common-Sense Corrections
ERIC Educational Resources Information Center
Cullen, Francis T.; Blevins, Kristie R.; Trager, Jennifer S.; Gendreau, Paul
2005-01-01
"Common sense" is often used as a powerful rationale for implementing correctional programs that have no basis in criminology and virtually no hope of reducing recidivism. Within this context, we undertake a case study in "common-sense' corrections by showing how the rise of boot camps, although having multiple causes, was ultimately legitimized…
Is the NIHSS Certification Process Too Lenient?
Hills, Nancy K.; Josephson, S. Andrew; Lyden, Patrick D.; Johnston, S. Claiborne
2009-01-01
Background and Purpose The National Institutes of Health Stroke Scale (NIHSS) is a widely used measure of neurological function in clinical trials and patient assessment; inter-rater scoring variability could impact communications and trial power. The manner in which the rater certification test is scored yields multiple correct answers that have changed over time. We examined the range of possible total NIHSS scores from answers given in certification tests by over 7,000 individual raters who were certified. Methods We analyzed the results of all raters who completed one of two standard multiple-patient videotaped certification examinations between 1998 and 2004. The range of correct scores, calculated using the NIHSS ‘correct answers’, was determined for each patient. The distribution of scores derived from those who passed the certification test was then examined. Results A total of 6,268 raters scored 5 patients on Test 1; 1,240 scored 6 patients on Test 2. Using a National Stroke Association (NSA) answer key, we found that the number of different correct total scores for a single patient ranged from 2 to as many as 12. Among raters who achieved a passing score and were therefore qualified to administer the NIHSS, score distributions were even wider, with 1 certification patient receiving 18 different correct total scores. Conclusions Allowing multiple acceptable answers for questions on the NIHSS certification test introduces scoring variability. It seems reasonable to assume that the wider the range of acceptable answers in the certification test, the greater the variability in the performance of the test in trials and clinical practice by certified examiners. Greater consistency may be achieved by deriving a set of ‘best’ answers through expert consensus on all questions where this is possible, then teaching raters how to derive these answers using a required interactive training module. PMID:19295205
Orbit correction in a linear nonscaling fixed field alternating gradient accelerator
Kelliher, D. J.; Machida, S.; Edmonds, C. S.; ...
2014-11-20
In a linear non-scaling FFAG, the large natural chromaticity of the machine results in a betatron tune that varies by several integers over the momentum range. In addition, orbit correction is complicated by the consequent variation of the phase advance between lattice elements. Here we investigate how the correction of multiple closed-orbit harmonics allows correction of both the closed orbit distortion (COD) and the accelerated orbit distortion over the momentum range.
Cartier, Vanessa; Inan, Cigdem; Zingg, Walter; Delhumeau, Cecile; Walder, Bernard; Savoldelli, Georges L
2016-08-01
Multimodal educational interventions have been shown to improve short-term competency in, and knowledge of, central venous catheter (CVC) insertion. To evaluate the effectiveness of simulation-based medical education training in improving short- and long-term competency in, and knowledge of, CVC insertion. Before and after intervention study. University Geneva Hospital, Geneva, Switzerland, between May 2008 and January 2012. Residents in anaesthesiology aware of the Seldinger technique for vascular puncture. Participants attended a half-day course on CVC insertion. Learning objectives included work organization, aseptic technique and prevention of CVC complications. CVC insertion competency was tested pretraining, posttraining and then more than 2 years after training (sustainability phase). The primary study outcome was competency as measured by a global rating scale of technical skills, a hand hygiene compliance score and a checklist compliance score. The secondary outcome was knowledge as measured by a standardised pretraining and posttraining multiple-choice questionnaire. Statistical analyses were performed using the paired Student's t test or the Wilcoxon signed-rank test. Thirty-seven residents were included; 18 were tested in the sustainability phase (on average 34 months after training). The average global rating of skills was 23.4 points (±SD 4.08) before training, 32.2 (±4.51) after training (P < 0.001 for comparison with pretraining scores) and 26.5 (±5.34) in the sustainability phase (P = 0.040 for comparison with pretraining scores). The average hand hygiene compliance score was 2.8 (±1.0) points before training, 5.0 (±1.04) after training (P < 0.001 for comparison with pretraining scores) and 3.7 (±1.75) in the sustainability phase (P = 0.038 for comparison with pretraining scores). The average checklist compliance score was 14.9 points (±2.3) before training, 19.9 (±1.06) after training (P < 0.001 for comparison with pretraining scores) and 17.4 (±1.41) in the sustainability phase (P = 0.002 for comparison with pretraining scores). The percentage of correct answers in the multiple-choice questionnaire increased from 76.0% (±7.9) before training to 87.7% (±4.4) after training (P < 0.001). Simulation-based medical education training was effective in improving short- and long-term competency in, and knowledge of, CVC insertion.
Implementation and performance of shutterless uncooled micro-bolometer cameras
NASA Astrophysics Data System (ADS)
Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.
2015-06-01
A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, hence the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) between the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very attractive for thermal infrared applications where small weight and size and continuous operation are important.
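To make the idea of a shutterless correction concrete, the sketch below shows a generic gain/offset non-uniformity correction in which a single global temperature coefficient adjusts the calibration offset; the function, parameter names and numbers are assumptions and not the Xenics onboard algorithm.

```python
# Minimal sketch: shutterless gain/offset non-uniformity correction where the
# per-pixel calibration offset is adjusted by one global temperature coefficient.
import numpy as np

def shutterless_nuc(raw, gain, offset_cal, temp_coeff, t_fpa, t_cal):
    """Correct a raw frame with calibration gain/offset maps plus a global drift term."""
    offset = offset_cal + temp_coeff * (t_fpa - t_cal)  # offset drift with FPA temperature
    return gain * raw - offset

# Hypothetical 4x4 frame and calibration data
rng = np.random.default_rng(0)
raw = 1000 + rng.normal(0, 5, (4, 4))
gain = np.ones((4, 4))
offset_cal = rng.normal(0, 3, (4, 4))
corrected = shutterless_nuc(raw, gain, offset_cal, temp_coeff=0.8, t_fpa=31.5, t_cal=30.0)
print(corrected.round(1))
```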
Can rats solve a simple version of the traveling salesman problem?
Bures, J; Buresová, O; Nerad, L
1992-12-31
Whereas correct tours through the radial arm maze are almost equally long, free-choice mazes with multiple goals scattered in an open field allow the animal to select the shortest one from a multitude of correct tours. Thirteen rats were trained (at 10 trials per day) to visit an array of cylindrical feeders in an open field (40 x 100 cm), with reward available only when visiting the last feeder of the set. In Expt. 1, with eight feeders arranged in five different configurations, the rats made, after 10 days of training, 1 error in the first 8 choices, and the incidence of errorless trials was about 20%. In Expt. 2, the use of six feeders in a rectangular (A) or double-triangle (B) configuration increased the incidence of errorless trials to 60%. Expt. 3 showed that performance in the 6-feeder maze was significantly impaired by 6 mg/kg ketamine or 0.25 mg/kg scopolamine, but not by lower dosages of these drugs. Tours generated on errorless trials (each feeder visited only once) during 10 days of Expt. 2 were analyzed. Six places can be visited in 6! = 720 different closed tours, the lengths of which (in arbitrary units) range from 6.00 to 10.12 for A and from 6.83 to 10.47 for B. Whereas random generation of correct routes yielded only 5% of the shortest tours, the shortest tours were clearly preferred by the rats (41% in A and 45% in B). The apparent proficiency of rats in this optimization problem is probably not due to cognitive comparison of the possible correct routes but rather to following a simple rule: 'Always go to the nearest not yet visited feeder'.
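The simple rule quoted above can be illustrated with a short sketch that compares the nearest-feeder heuristic against an exhaustive enumeration of all 6! = 720 orderings; the feeder coordinates are hypothetical and the code is not the authors' analysis.

```python
# Minimal sketch: the "always go to the nearest not-yet-visited feeder" rule
# versus all 720 orderings of a hypothetical 6-feeder layout.
from itertools import permutations
import math

feeders = [(0, 0), (4, 0), (8, 0), (0, 3), (4, 3), (8, 3)]  # assumed rectangular layout

def length(tour):
    return sum(math.dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def nearest_neighbour(points, start=0):
    tour, rest = [points[start]], set(range(len(points))) - {start}
    while rest:
        nxt = min(rest, key=lambda j: math.dist(tour[-1], points[j]))
        tour.append(points[nxt])
        rest.remove(nxt)
    return tour

all_lengths = sorted(length(list(p)) for p in permutations(feeders))
nn_length = length(nearest_neighbour(feeders))
print(f"shortest possible: {all_lengths[0]:.2f}, nearest-neighbour rule: {nn_length:.2f}")
```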
NASA Astrophysics Data System (ADS)
Wahl, Daniel J.; Zhang, Pengfei; Jian, Yifan; Bonora, Stefano; Sarunic, Marinko V.; Zawadzki, Robert J.
2017-02-01
Adaptive optics (AO) is essential for achieving diffraction limited resolution in large numerical aperture (NA) in-vivo retinal imaging in small animals. Cellular-resolution in-vivo imaging of fluorescently labeled cells is highly desirable for studying pathophysiology in animal models of retina diseases in pre-clinical vision research. Currently, wavefront sensor-based (WFS-based) AO is widely used for retinal imaging and has demonstrated great success. However, the performance can be limited by several factors including common path errors, wavefront reconstruction errors and an ill-defined reference plane on the retina. Wavefront sensorless (WFS-less) AO has the advantage of avoiding these issues at the cost of algorithmic execution time. We have investigated WFS-less AO on a fluorescence scanning laser ophthalmoscopy (fSLO) system that was originally designed for WFS-based AO. The WFS-based AO uses a Shack-Hartmann WFS and a continuous surface deformable mirror in a closed-loop control system to measure and correct for aberrations induced by the mouse eye. The WFS-less AO performs an open-loop modal optimization with an image quality metric. After WFS-less AO aberration correction, the WFS was used as a control measurement to verify the correction achieved by the WFS-less AO operation. We can easily switch between WFS-based and WFS-less control of the deformable mirror multiple times within an imaging session for the same mouse. This allows for a direct comparison between these two types of AO correction for fSLO. Our results demonstrate volumetric AO-fSLO imaging of mouse retinal cells labeled with GFP. Most significantly, we have analyzed and compared the aberration correction results for WFS-based and WFS-less AO imaging.
Characterizations of double pulsing in neutron multiplicity and coincidence counting systems
Koehler, Katrina E.; Henzl, Vladimir; Croft, Stephen; ...
2016-06-29
Passive neutron coincidence/multiplicity counters are subject to non-ideal behavior, such as double pulsing and dead time. It has been shown in the past that double-pulsing exhibits a distinct signature in a Rossi-alpha distribution, which is not readily noticed using traditional Multiplicity Shift Register analysis. But, it has been assumed that the use of a pre-delay in shift register analysis removes any effects of double pulsing. Here, we use high-fidelity simulations accompanied by experimental measurements to study the effects of double pulsing on multiplicity rates. By exploiting the information from the double pulsing signature peak observable in the Rossi-alpha distribution, the double pulsing fraction can be determined. Algebraic correction factors for the multiplicity rates in terms of the double pulsing fraction have been developed. We also discuss the role of these corrections across a range of scenarios.
Rapid Vision Correction by Special Operations Forces.
Reynolds, Mark E
This report describes a rapid method of vision correction used by Special Operations Medics in multiple operational engagements. Between 2011 and 2015, Special Operations Medics used an algorithm-driven refraction technique. A standard block of instruction was provided to the medics, along with a packaged kit. The technique was used in multiple operational engagements with host nation military and civilians. Data collected for program evaluation were later analyzed to assess the utility of the technique. Glasses were distributed to 230 patients with complaints of either decreased distance or near (reading) vision. Most patients (84%) with distance complaints achieved corrected binocular vision of 20/40 or better, and 97% of patients with near-vision complaints achieved corrected near-binocular vision of 20/40 or better. There was no statistically significant difference between the percentages of patients achieving 20/40 when medics used the technique under direct supervision versus independent use. A basic refraction technique using a designed kit allows for meaningful improvement in distance and/or near vision at austere locations. Special Operations Medics can leverage this approach after specific training with minimal time commitment. It can serve as a rapid, effective intervention with multiple applications in diverse operational environments.
Stray-Light Correction of the Marine Optical Buoy
NASA Technical Reports Server (NTRS)
Brown, Steven W.; Johnson, B. Carol; Flora, Stephanie J.; Feinholz, Michael E.; Yarbrough, Mark A.; Barnes, Robert A.; Kim, Yong Sung; Lykke, Keith R.; Clark, Dennis K.
2003-01-01
In ocean-color remote sensing, approximately 90% of the flux at the sensor originates from atmospheric scattering, with the water-leaving radiance contributing the remaining 10% of the total flux. Consequently, errors in the measured top-of-the-atmosphere radiance are magnified by a factor of 10 in the determination of water-leaving radiance. Proper characterization of the atmosphere is thus a critical part of the analysis of ocean-color remote sensing data. It has always been necessary to calibrate the ocean-color satellite sensor vicariously, using in situ, ground-based results, independent of the status of the pre-flight radiometric calibration or the utility of on-board calibration strategies. Because the atmosphere contributes significantly to the measured flux at the instrument sensor, both the instrument and the atmospheric correction algorithm are simultaneously calibrated vicariously. The Marine Optical Buoy (MOBY), deployed in support of the Earth Observing System (EOS) since 1996, serves as the primary calibration station for a variety of ocean-color satellite instruments, including the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Japanese Ocean Color Temperature Scanner (OCTS), and the French Polarization and Directionality of the Earth's Reflectances (POLDER). MOBY is located off the coast of Lanai, Hawaii. The site was selected to simplify the application of the atmospheric correction algorithms. Vicarious calibration using MOBY data allows for a thorough comparison and merger of ocean-color data from these multiple sensors.
Convolution- and Fourier-transform-based reconstructors for pyramid wavefront sensor.
Shatokhina, Iuliia; Ramlau, Ronny
2017-08-01
In this paper, we present two novel algorithms for wavefront reconstruction from pyramid-type wavefront sensor data. An overview of the current state of the art in the application of pyramid-type wavefront sensors shows that the novel algorithms can be applied in various scientific fields such as astronomy, ophthalmology, and microscopy. Assuming a computationally very challenging setting corresponding to the extreme adaptive optics (XAO) on the European Extremely Large Telescope, we present the results of end-to-end simulations and compare the achieved AO correction quality (in terms of the long-exposure Strehl ratio) to other methods, such as matrix-vector multiplication and the preprocessed cumulative reconstructor with domain decomposition. We also compare our novel algorithms with other methods existing for this type of sensor in terms of applicability, computational complexity, and closed-loop performance.
Examination of the high-frequency capability of carbon nanotube FETs
NASA Astrophysics Data System (ADS)
Pulfrey, David L.; Chen, Li
2008-09-01
New results are added to a recent critique of the high-frequency performance of carbon nanotube field-effect transistors (CNFETs). On the practical side, reduction of the number of metallic tubes in CNFETs fashioned from multiple nanotubes has allowed the measured fT to be increased to 30 GHz. On the theoretical side, the opinion that the band-structure-determined velocity limits the high-frequency performance has been reinforced by corrections to recent simulation results for doped-contact CNFETs, and by the ruling out of the possibility of favourable image-charge effects. Inclusion in the simulations of the features of finite gate-metal thickness and source/drain contact resistance has given an indication of likely practical values for fT. A meaningful comparison between CNFETs with doped-contacts and metallic contacts has been made.
NASA Astrophysics Data System (ADS)
Stroud, Mary W.
This investigation, rooted in both chemistry and education, considers outcomes occurring in a small-scale study in which concept mapping was used as an instructional intervention in an undergraduate calorimetry laboratory. A quasi-experimental, multiple-methods approach was employed since the research questions posed in this study warranted the use of both qualitative and quantitative perspectives and evaluations. For the intervention group of students, a convenience sample, post-lab concept maps, written discussions, quiz responses and learning surveys were characterized and evaluated. Archived quiz responses for non-intervention students were also analyzed for comparison. Students constructed unique individual concept maps containing incorrect, conceptually correct and "scientifically thin" calorimetry characterizations. Students placed greater emphasis on the mathematical relationships and equations utilized during the calorimetry experiment; the meaning of calorimetry concepts was demonstrated to a lesser extent.
Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong
2014-01-01
The osmotic pressure of ammonium sulfate solutions has been measured by well-established freezing point osmometry in dilute solutions, and we recently reported air humidity osmometry over a much wider range of concentrations. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the freezing point osmometry measurements and the Pitzer-model calculations at low concentrations, air humidity osmometry is the only currently available osmometry applicable to high concentrations and serves as an economical addition to standard osmometry.
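A minimal sketch of the two one-sided tests (TOST) procedure referred to above is given below; the equivalence margin, the paired-data layout and the example values are assumptions for illustration, not the paper's data or exact procedure.

```python
# Minimal sketch: two one-sided tests (TOST) of equivalence between paired
# measurements and model calculations, with an assumed equivalence margin `delta`.
import numpy as np
from scipy import stats

def tost_paired(x, y, delta):
    """Return the TOST p value; equivalence is declared if it is below alpha."""
    d = np.asarray(x) - np.asarray(y)
    n, mean, se = len(d), d.mean(), d.std(ddof=1) / np.sqrt(len(d))
    p_lower = stats.t.sf((mean + delta) / se, n - 1)   # H0: mean <= -delta
    p_upper = stats.t.cdf((mean - delta) / se, n - 1)  # H0: mean >= +delta
    return max(p_lower, p_upper)

measured   = [1.02, 0.98, 1.05, 0.99, 1.01, 1.03]   # hypothetical osmotic pressures
calculated = [1.00, 1.00, 1.04, 1.00, 1.00, 1.02]   # hypothetical Pitzer-model values
print(f"TOST p = {tost_paired(measured, calculated, delta=0.05):.4f}")
```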
Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies
NASA Astrophysics Data System (ADS)
Yang, Jun
2000-12-01
The partial volume effect is an artifact mainly due to limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially in Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial-volume-corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel-based or ROI-based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial-volume-corrected glucose rates vary significantly among the control, at-risk, and disease patient groups and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
NASA Astrophysics Data System (ADS)
Cai, Zhijian; Zou, Wenlong; Wu, Jianhong
2017-10-01
Raman spectroscopy has been extensively used in biochemical testing, explosive detection, food additive analysis, and environmental pollutant monitoring. However, fluorescence interference is a major problem for portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence-suppression methods. In this paper, we compared the performance of baseline correction and SERDS, both experimentally and in simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, the Raman signals can be clearly extracted even when they are very weak compared with the fluorescence intensity and noise level, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. The results indicate that baseline correction is more suitable for large bench-top Raman systems with better signal quality or signal-to-noise ratio, while SERDS is more suitable for noisier devices, especially portable Raman spectrometers.
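As a concrete illustration of the baseline-correction approach compared above, here is a minimal iterative polynomial baseline fit; the polynomial order, iteration count and synthetic spectrum are assumptions, and this is not the specific algorithm used in the paper.

```python
# Minimal sketch: iterative polynomial baseline correction for removing a broad
# fluorescence background from a Raman spectrum.
import numpy as np

def polynomial_baseline(y, order=5, n_iter=50):
    """Fit a polynomial repeatedly, clipping points above the fit so peaks are ignored."""
    x = np.arange(len(y))
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, order)
        baseline = np.polyval(coeffs, x)
        work = np.minimum(work, baseline)  # suppress Raman peaks, keep the background
    return baseline

# Hypothetical spectrum: broad background + two Raman peaks + noise
x = np.arange(1000)
spectrum = 1e-4 * (x - 500) ** 2 \
    + 200 * np.exp(-0.5 * ((x - 300) / 5) ** 2) \
    + 150 * np.exp(-0.5 * ((x - 700) / 5) ** 2) \
    + np.random.normal(0, 2, x.size)
corrected = spectrum - polynomial_baseline(spectrum)
```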
ERIC Educational Resources Information Center
Kelley, Timothy P.; Hora, Judith A.
2008-01-01
This paper demonstrates why EPS comparisons across companies are meaningless. An example is provided showing how a company with a higher ROE than another company may have a lower EPS simply from having a lower book value per share (and more shares outstanding) than the comparison company. While ROE comparisons across companies can be useful,…
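A small worked example of that point, with hypothetical numbers, since EPS = ROE × book value per share:

```python
# Hypothetical firms: the one with the higher ROE reports the lower EPS because
# it has a lower book value per share (more shares outstanding).
firms = {
    "Firm A": {"net_income": 150.0, "equity": 1000.0, "shares": 500.0},  # higher ROE, more shares
    "Firm B": {"net_income": 120.0, "equity": 1000.0, "shares": 100.0},  # lower ROE, fewer shares
}
for name, f in firms.items():
    roe = f["net_income"] / f["equity"]
    bvps = f["equity"] / f["shares"]        # book value per share
    eps = f["net_income"] / f["shares"]     # equivalently roe * bvps
    print(f"{name}: ROE = {roe:.0%}, book value/share = {bvps:.2f}, EPS = {eps:.2f}")
# Firm A: ROE 15%, EPS 0.30; Firm B: ROE 12%, EPS 1.20 -> higher ROE yet lower EPS.
```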
LCC demons with divergence term for liver MRI motion correction
NASA Astrophysics Data System (ADS)
Oh, Jihun; Martin, Diego; Skrinjar, Oskar
2010-03-01
Contrast-enhanced liver MR image sequences acquired at multiple times before and after contrast administration have been shown to be critically important for the diagnosis and monitoring of liver tumors and may be used for the quantification of liver inflammation and fibrosis. However, over multiple acquisitions, the liver moves and deforms due to patient and respiratory motion. In order to analyze contrast agent uptake, one first needs to correct for liver motion. In this paper we present a method for the motion correction of dynamic contrast-enhanced liver MR images. For this purpose we use a modified version of the Local Correlation Coefficient (LCC) Demons non-rigid registration method. Since the liver is nearly incompressible, its displacement field has small divergence. For this reason we add a divergence term to the energy that is minimized in the LCC Demons method. We applied the method to four sequences of contrast-enhanced liver MR images. Each sequence had a pre-contrast scan and seven post-contrast scans. For each post-contrast scan we corrected for the liver motion relative to the pre-contrast scan. Quantitative evaluation showed that the proposed method improved the liver alignment relative to the non-corrected and translation-corrected scans, and visual inspection showed no visible misalignment between the motion-corrected contrast-enhanced scans and the pre-contrast scan.
Theodorsson-Norheim, E
1986-08-01
Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
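The published program is in BASIC; as a rough modern equivalent of the analysis it performs, the sketch below runs a Kruskal-Wallis test followed by pairwise comparisons with a Bonferroni adjustment. The group data are made up.

```python
# Minimal sketch: Kruskal-Wallis test followed by pairwise Mann-Whitney
# comparisons with Bonferroni-adjusted p values.
from itertools import combinations
from scipy import stats

groups = {
    "control":   [3.1, 2.9, 3.4, 3.0, 3.2],
    "treated_a": [3.8, 4.1, 3.9, 4.3, 4.0],
    "treated_b": [3.0, 3.3, 3.1, 2.8, 3.2],
}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    u, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(1.0, p_pair * len(pairs))  # Bonferroni correction
    print(f"{a} vs {b}: p = {p_pair:.4f}, Bonferroni-adjusted p = {p_adj:.4f}")
```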
Item Reliabilities for a Family of Answer-Until-Correct (AUC) Scoring Rules.
ERIC Educational Resources Information Center
Kane, Michael T.; Moloney, James M.
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Color correction with blind image restoration based on multiple images using a low-rank model
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Xudong; Lam, Kin-Man
2014-03-01
We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
Cross, Russell; Olivieri, Laura; O'Brien, Kendall; Kellman, Peter; Xue, Hui; Hansen, Michael
2016-02-25
Traditional cine imaging for cardiac functional assessment requires breath-holding, which can be problematic in some situations. Free-breathing techniques have relied on multiple averages or real-time imaging, producing images that can be spatially and/or temporally blurred. To overcome this, methods have been developed to acquire real-time images over multiple cardiac cycles, which are subsequently motion corrected and reformatted to yield a single image series displaying one cardiac cycle with high temporal and spatial resolution. Application of these algorithms has required significant additional reconstruction time. The use of distributed computing was recently proposed as a way to improve clinical workflow with such algorithms. In this study, we have deployed a distributed computing version of motion corrected re-binning reconstruction for free-breathing evaluation of cardiac function. Twenty five patients and 25 volunteers underwent cardiovascular magnetic resonance (CMR) for evaluation of left ventricular end-systolic volume (ESV), end-diastolic volume (EDV), and end-diastolic mass. Measurements using motion corrected re-binning were compared to those using breath-held SSFP and to free-breathing SSFP with multiple averages, and were performed by two independent observers. Pearson correlation coefficients and Bland-Altman plots tested agreement across techniques. Concordance correlation coefficient and Bland-Altman analysis tested inter-observer variability. Total scan plus reconstruction times were tested for significant differences using paired t-test. Measured volumes and mass obtained by motion corrected re-binning and by averaged free-breathing SSFP compared favorably to those obtained by breath-held SSFP (r = 0.9863/0.9813 for EDV, 0.9550/0.9685 for ESV, 0.9952/0.9771 for mass). Inter-observer variability was good with concordance correlation coefficients between observers across all acquisition types suggesting substantial agreement. Both motion corrected re-binning and averaged free-breathing SSFP acquisition and reconstruction times were shorter than breath-held SSFP techniques (p < 0.0001). On average, motion corrected re-binning required 3 min less than breath-held SSFP imaging, a 37% reduction in acquisition and reconstruction time. The motion corrected re-binning image reconstruction technique provides robust cardiac imaging that can be used for quantification that compares favorably to breath-held SSFP as well as multiple average free-breathing SSFP, but can be obtained in a fraction of the time when using cloud-based distributed computing reconstruction.
ERIC Educational Resources Information Center
Llinares, Ana; Lyster, Roy
2014-01-01
This study compares the frequency and distribution of different types of corrective feedback (CF) (recasts, prompts and explicit correction) and learner uptake in 43 hours of classroom interaction at the 4th-5th grade level across three instructional settings: (1) two content and language integrated learning (CLIL) classrooms in Spain with English…
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
The natural resources of the Amazon Region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonic peculiarities revealed by ground truth of the same area, and the classification was repeated. Comparison shows that the classification improves with the atmospherically corrected images.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang
2013-01-01
As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
2011-07-01
taken with the same camera head, operating temperature, range of calibrated blackbody illuminations, and using the same long-wavelength IR (LWIR) f/2… measurements shown in this article and are tabulated for comparison purposes only. Images were taken with all four devices using an f/2 LWIR lens (8-12 μm)… These were acquired after a nonuniformity correction. A custom image-scaling algorithm was used to avoid the standard nonuniformity-corrected scaling
A study of the luminosity function for field galaxies. [non-rich-cluster galaxies
NASA Technical Reports Server (NTRS)
Felten, J. E.
1977-01-01
Nine determinations of the luminosity function (LF) for field galaxies are analyzed and compared. Corrections for differences in Hubble constants, magnitude systems, galactic absorption functions, and definitions of the LF are necessary prior to comparison. Errors in previous comparisons are pointed out. After these corrections, eight of the nine determinations are in fairly good agreement. The discrepancy in the ninth appears to be mainly an incompleteness effect. The LF data suggest that there is little if any distinction between field galaxies and those in small groups.
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Berlin, Michael S.; Hofbauer, John; Macy, Jonathan I.; Nesburn, Anthony B.; Papaioannou, Thanassis; Salz, James J.
1992-08-01
Sixty-two eyes underwent excimer laser photorefractive keratectomy (PRK) for the correction of myopia at Cedars-Sinai Medical Center. The first group of 12 patients is presented with follow-up data at ten months postoperatively. The second group of 50 patients is presented with follow-up data at three months postoperatively. An in-depth comparison of pre- and postoperative refractive data is presented. Comparisons between pre- and postoperative corrected and uncorrected Snellen visual acuities are provided in order to assess the functional visual result of the procedure.
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
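To illustrate the simplest of the bias models discussed above, the sketch below removes a purely additive row/column bias from a plate by alternating median subtraction; it is an illustrative assumption, not the statistical procedure implemented in AssayCorrector.

```python
# Minimal sketch: removing an additive row/column spatial bias from a multiwell
# plate by median polish (alternating subtraction of row and column medians).
import numpy as np

def remove_additive_bias(plate, n_iter=10):
    corrected = plate.astype(float).copy()
    for _ in range(n_iter):
        corrected -= np.median(corrected, axis=1, keepdims=True)  # row effects
        corrected -= np.median(corrected, axis=0, keepdims=True)  # column effects
    return corrected + np.median(plate)  # restore the overall plate level

# Hypothetical 8x12 plate with one biased row and one biased column
rng = np.random.default_rng(1)
plate = rng.normal(100, 2, (8, 12))
plate[2, :] += 15   # additive row bias
plate[:, 5] += 10   # additive column bias
print(remove_additive_bias(plate).std().round(2))
```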
Gender Differences in Comparisons and Entitlement: Implications for Comparable Worth.
ERIC Educational Resources Information Center
Major, Brenda
1989-01-01
Addresses the role of comparison processes in the persistence of the gender wage gap, its toleration by those disadvantaged by it, and resistance to comparable worth as a corrective strategy. Argues that gender segregation and undercompensation for women's jobs leads women to use different comparison standards when evaluating what they deserve.…
Comparison of the plenoptic sensor and the Shack-Hartmann sensor.
Ko, Jonathan; Davis, Christopher C
2017-05-01
Adaptive optics has been successfully used for decades in the field of astronomy to correct for atmospheric turbulence. A well-developed example involves sensing the slightly distorted wavefronts with a Shack-Hartmann sensor and then correcting them with a phase conjugate device. While the Shack-Hartmann sensor has proven effective for astronomical purposes, it has been less successful for use in deep turbulence conditions often found in ground-to-ground-based optical systems. We have studied an alternative way to sense and correct distorted wavefronts using a plenoptic sensor. We review the design of the plenoptic sensor and directly compare it with the well-known Shack-Hartmann sensor. An experimental comparison of the plenoptic sensor and the Shack-Hartmann sensor is performed to highlight their differences in real-world atmospheric turbulence conditions.
NASA Astrophysics Data System (ADS)
Itoh, Naoki; Kawana, Youhei; Nozawa, Satoshi; Kohyama, Yasuharu
2001-10-01
We extend the formalism for the calculation of the relativistic corrections to the Sunyaev-Zel'dovich effect for clusters of galaxies and include the multiple scattering effects in the isotropic approximation. We present the results of the calculations by the Fokker-Planck expansion method as well as by the direct numerical integration of the collision term of the Boltzmann equation. The multiple scattering contribution is found to be very small compared with the single scattering contribution. For high-temperature galaxy clusters of k_B T_e ~ 15 keV, the ratio of both the contributions is -0.2 per cent in the Wien region. In the Rayleigh-Jeans region the ratio is -0.03 per cent. Therefore the multiple scattering contribution is safely neglected for the observed galaxy clusters.
Training Correctional Educators: A Needs Assessment Study.
ERIC Educational Resources Information Center
Jurich, Sonia; Casper, Marta; Hull, Kim A.
2001-01-01
Focus groups and a training needs survey of Virginia correctional educators identified educational philosophy, communication skills, human behavior, and teaching techniques as topics of interest. Classroom observations identified additional areas: teacher isolation, multiple challenges, absence of grade structure, and safety constraints. (Contains…
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
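The 'override ratio' idea described above reduces to simple arithmetic, sketched below with hypothetical dose values; the variable names and the acceptability threshold in the comment are assumptions.

```python
# Minimal sketch of the override-ratio check: the ratio of a dose metric to the same
# metric on a bulk-density-assigned image is assumed constant per patient, so the
# CBCT-based ratio can be compared against the planning-CT ratio. Numbers are made up.
d_cbct, d_cbct_bulk = 49.2, 50.1   # dose metric (Gy) on corrected CBCT and bulk-density CBCT
d_ct, d_ct_bulk = 49.9, 50.5       # same metrics on the planning CT (gold standard)

ratio_cbct = d_cbct / d_cbct_bulk
ratio_ct = d_ct / d_ct_bulk
error_pct = 100 * (ratio_cbct - ratio_ct) / ratio_ct
print(f"override-ratio error = {error_pct:.2f}%")  # errors of roughly 1-2% or less suggest an acceptable correction
```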
The Use of Meta-Analytic Statistical Significance Testing
ERIC Educational Resources Information Center
Polanin, Joshua R.; Pigott, Terri D.
2015-01-01
Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results, experimental electron-probe microanalysis (EPMA) measurements, and phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductor and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
Heintz, Sonja; Ruch, Willibald; Platt, Tracey; Pang, Dandan; Carretero-Dios, Hugo; Dionigi, Alberto; Argüello Gutiérrez, Catalina; Brdar, Ingrid; Brzozowska, Dorota; Chen, Hsueh-Chih; Chłopicki, Władysław; Collins, Matthew; Ďurka, Róbert; Yahfoufi, Najwa Y. El; Quiroga-Garza, Angélica; Isler, Robert B.; Mendiburo-Seguel, Andrés; Ramis, TamilSelvan; Saglam, Betül; Shcherbakova, Olga V.; Singh, Kamlesh; Stokenberga, Ieva; Wong, Peter S. O.; Torres-Marín, Jorge
2018-01-01
Recently, two forms of virtue-related humor, benevolent and corrective, have been introduced. Benevolent humor treats human weaknesses and wrongdoings benevolently, while corrective humor aims at correcting and bettering them. Twelve marker items for benevolent and corrective humor (the BenCor) were developed, and it was demonstrated that they fill the gap between humor as temperament and virtue. The present study investigates responses to the BenCor from 25 samples in 22 countries (overall N = 7,226). The psychometric properties of the BenCor were found to be sufficient in most of the samples, including internal consistency, unidimensionality, and factorial validity. Importantly, benevolent and corrective humor were clearly established as two positively related, yet distinct dimensions of virtue-related humor. Metric measurement invariance was supported across the 25 samples, and scalar invariance was supported across six age groups (from 18 to 50+ years) and across gender. Comparisons of samples within and between four countries (Malaysia, Switzerland, Turkey, and the UK) showed that the item profiles were more similar within than between countries, though some evidence for regional differences was also found. This study thus supported, for the first time, the suitability of the 12 marker items of benevolent and corrective humor in different countries, enabling a cumulative cross-cultural research and eventually applications of humor aiming at the good. PMID:29479326
ERIC Educational Resources Information Center
Leyden, Michael
1996-01-01
Describes use of a sundial to study Earth's orbit and time. Covers construction of sundial, exploration phase, introduction of concept of time as determined by the position of the sun in relation to the observer's meridian, comparison of sundial time and wristwatch time, longitudinal corrections, introduction of orbital corrections, and further…
A Correction for IUE UV Flux Distributions from Comparisons with CALSPEC
NASA Astrophysics Data System (ADS)
Bohlin, Ralph C.; Bianchi, Luciana
2018-04-01
A collection of spectral energy distributions (SEDs) is available in the Hubble Space Telescope (HST) CALSPEC database that is based on calculated model atmospheres for pure hydrogen white dwarfs (WDs). A much larger set (∼100,000) of UV SEDs covering the range (1150–3350 Å) with somewhat lower quality are available in the IUE database. IUE low-dispersion flux distributions are compared with CALSPEC to provide a correction that places IUE fluxes on the CALSPEC scale. While IUE observations are repeatable to only 4%–10% in regions of good sensitivity, the average flux corrections have a precision of 2%–3%. Our re-calibration places the IUE flux scale on the current UV reference standard and is relevant for any project based on IUE archival data, including our planned comparison of GALEX to the corrected IUE fluxes. IUE SEDs may be used to plan observations and cross-calibrate data from future missions, so the IUE flux calibration must be consistent with HST instrumental calibrations to the best possible precision.
Comparison of sEMG processing methods during whole-body vibration exercise.
Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S
2015-12-01
The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
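A minimal sketch of the band-stop (notch) filtering approach evaluated above is shown below; the sampling rate, vibration frequency, number of harmonics and synthetic signal are assumptions, and the study's exact processing may differ.

```python
# Minimal sketch: removing whole-body-vibration artifacts from an sEMG signal with
# IIR notch (band-stop) filters at the vibration frequency and its harmonics.
import numpy as np
from scipy import signal

fs = 1000.0        # sampling rate (Hz), assumed
f_vib = 30.0       # platform vibration frequency (Hz), assumed
n_harmonics = 5

rng = np.random.default_rng(2)
emg = rng.normal(0, 1, 5000)                 # stand-in for a recorded sEMG trace
t = np.arange(emg.size) / fs
for h in range(1, n_harmonics + 1):          # add synthetic vibration artifacts
    emg += 0.8 * np.sin(2 * np.pi * f_vib * h * t)

filtered = emg.copy()
for h in range(1, n_harmonics + 1):
    b, a = signal.iirnotch(w0=f_vib * h, Q=30.0, fs=fs)
    filtered = signal.filtfilt(b, a, filtered)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"RMS before: {rms(emg):.3f}, after notch filtering: {rms(filtered):.3f}")
```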
Vigorito, Fabio de Abreu; Dominguez, Gladys Cristina; Aidar, Luís Antônio de Arruda
2014-01-01
Objective To assess the dentoskeletal changes observed in the treatment of Class II, division 1 malocclusion patients with mandibular retrognathism. Treatment was performed with the Herbst orthopedic appliance for 13 months (phase I) followed by a pre-adjusted fixed orthodontic appliance (phase II). Methods Lateral cephalograms of 17 adolescents were taken at phase I onset (T1) and completion (T2), in the first thirteen months of phase II (T3), and at phase II completion (T4). Differences among the cephalometric variables were statistically analyzed (analysis of variance with Bonferroni multiple comparisons). Results From T1 to T4, 42% of overall maxillary growth was observed between T1 and T2 (P < 0.01), 40.3% between T2 and T3 (P < 0.05) and 17.7% between T3 and T4 (n.s.). As for overall mandibular movement, 48.2% was observed between T1 and T2 (P < 0.001) and 51.8% between T2 and T4 (P < 0.01), of which 15.1% was observed between T2 and T3 (n.s.) and 36.7% between T3 and T4 (P < 0.01). The Class II molar relationship and overjet were properly corrected. The occlusal plane, which rotated clockwise between T1 and T2, returned to its initial position between T2 and T3, remaining stable until T4. The mandibular plane inclination did not change at any time during treatment. Conclusion Mandibular growth was significantly greater than maxillary growth, allowing sagittal maxillomandibular adjustment. The dentoalveolar changes (upper molar) that overcorrected the malocclusion in phase I partially recurred in phase II, but did not hinder correction of the malocclusion. Facial type was preserved. PMID:24713559
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit… likely to be isolated and be correctable by the convolutional decoder. [Table residue: data rate (Mbps), modulation, coding rate, coded bits per subcarrier.] … binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Kan, Monica W K; Leung, Lucullus H T; Yu, Peter K N
2013-11-04
A new version of progressive resolution optimizer (PRO) with an option of air cavity correction has been implemented for RapidArc volumetric-modulated arc therapy (RA). The purpose of this study was to compare the performance of this new PRO with the use of air cavity correction option (PRO10_air) against the one without the use of the air cavity correction option (PRO10_no-air) for RapidArc planning in targets with low-density media of different sizes and complexities. The performance of PRO10_no-air and PRO10_air was initially compared using single-arc plans created for four different simple heterogeneous phantoms with virtual targets and organs at risk. Multiple-arc planning of 12 real patients having nasopharyngeal carcinomas (NPC) and ten patients having non-small cell lung cancer (NSCLC) were then performed using the above two options for further comparison. Dose calculations were performed using both the Acuros XB (AXB) algorithm with the dose to medium option and the analytical anisotropic algorithm (AAA). The effect of using intermediate dose option after the first optimization cycle in PRO10_air and PRO10_no-air was also investigated and compared. Plans were evaluated and compared using target dose coverage, critical organ sparing, conformity index, and dose homogeneity index. For NSCLC cases or cases for which large volumes of low-density media were present in or adjacent to the target volume, the use of the air cavity correction option in PRO10 was shown to be beneficial. For NPC cases or cases for which small volumes of both low- and high-density media existed in the target volume, the use of air cavity correction in PRO10 did not improve the plan quality. Based on the AXB dose calculation results, the use of PRO10_air could produce up to 18% less coverage to the bony structures of the planning target volumes for NPC cases. When the intermediate dose option in PRO10 was used, there was negligible difference observed in plan quality between optimizations with and without using the air cavity correction option.
Attenuation correction strategies for multi-energy photon emitters using SPECT
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.
1997-06-01
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.
Using Comparison of Multiple Strategies in the Mathematics Classroom: Lessons Learned and Next Steps
ERIC Educational Resources Information Center
Durkin, Kelley; Star, Jon R.; Rittle-Johnson, Bethany
2017-01-01
Comparison is a fundamental cognitive process that can support learning in a variety of domains, including mathematics. The current paper aims to summarize empirical findings that support recommendations on using comparison of multiple strategies in mathematics classrooms. We report the results of our classroom-based research on using comparison…
Comparison of Modal to Nodal Approaches for Wavefront Correction,
1986-02-01
the influence function of the wavefront corrector. (Implicit here is the assumption that the influence function is the same for every node, which is… To implement a nodal correction, the wavefront to be corrected is decomposed using a basis which is determined by the nodal (actuator) influence function of the wavefront corrector. This decomposition results in a set of coefficients which correspond to the drive signal required at the
Double air-fuel ratio sensor system having double-skip function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuno, T.
1988-01-26
A method for controlling the air-fuel ratio in an internal combustion engine is described having a catalyst converter for removing pollutants in the exhaust gas thereof, and upstream-side and downstream-side air-fuel ratio sensors disposed upstream and downstream, respectively, of the catalyst converter for detecting a concentration of a specific component in an exhaust gas, comprising the steps of: comparing the output of the upstream-side air-fuel ratio sensor with a first predetermined value; gradually changing a first air-fuel ratio correction amount in accordance with a result of the comparison of the output of the upstream-side air-fuel ratio sensor with the predetermined value; shifting the first air-fuel ratio correction amount by a first skip amount during a predetermined time period after the result of the comparison of the upstream-side air-fuel ratio sensor is changed; shifting the first air-fuel ratio correction amount by a second skip amount smaller than the first skip amount after the predetermined time period has passed; comparing the output of the downstream-side air-fuel ratio sensor with a second predetermined value; calculating a second air-fuel ratio correction amount in accordance with the comparison result of the output of the downstream-side air-fuel ratio sensor with the second predetermined value; and adjusting the actual air-fuel ratio in accordance with the first and second air-fuel ratio correction amounts; wherein the gradually-changing step comprises the steps of: gradually decreasing the first air-fuel ratio correction amount when the output of the upstream-side air-fuel sensor is on the rich side with respect to the first predetermined value; and gradually increasing the first air-fuel ratio correction amount when the output of the upstream-side air-fuel sensor is on the lean side with respect to the first predetermined value.
Radiosondes Corrected for Inaccuracy in RH Measurements
Miloshevich, Larry
2008-01-15
Corrections for inaccuracy in Vaisala radiosonde RH measurements have been applied to ARM SGP radiosonde soundings. The magnitude of the corrections can vary considerably between soundings. The radiosonde measurement accuracy, and therefore the correction magnitude, is a function of atmospheric conditions, mainly T, RH, and dRH/dt (humidity gradient). The corrections are also very sensitive to the RH sensor type, and there are 3 Vaisala sensor types represented in this dataset (RS80-H, RS90, and RS92). Depending on the sensor type and the radiosonde production date, one or more of the following three corrections were applied to the RH data: Temperature-Dependence correction (TD), Contamination-Dry Bias correction (C), Time Lag correction (TL). The estimated absolute accuracy of NIGHTTIME corrected and uncorrected Vaisala RH measurements, as determined by comparison to simultaneous reference-quality measurements from Holger Voemel's (CU/CIRES) cryogenic frostpoint hygrometer (CFH), is given by Miloshevich et al. (2006).
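For orientation, the time-lag (TL) part of such a correction is commonly an inversion of a first-order sensor response, of the generic form sketched below; this is not the specific ARM/Vaisala algorithm, and in practice the time constant tau varies strongly with temperature.

    import numpy as np

    def time_lag_correct(rh, dt, tau):
        """Invert a first-order sensor lag to estimate ambient RH from measured RH.
        rh: measured RH series (%), dt: sample interval (s), tau: time constant (s)."""
        rh = np.asarray(rh, dtype=float)
        x = np.exp(-dt / tau)
        rh_true = np.empty_like(rh)
        rh_true[0] = rh[0]
        # discrete inversion of the exponential lag between consecutive samples
        rh_true[1:] = (rh[1:] - rh[:-1] * x) / (1.0 - x)
        return rh_true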
Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.
2015-01-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
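A stripped-down sketch of the core construction (not the MiRKAT package itself): the distance matrix is Gower-centred into a kernel and a variance-component score statistic is computed, with the null distribution obtained here by permutation instead of the analytical p-value calculation, and covariates reduced to an intercept for brevity.

    import numpy as np

    def distance_to_kernel(D):
        """Gower-centre a distance matrix into a kernel: K = -0.5 * J * D^2 * J."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        return -0.5 * J @ (D ** 2) @ J

    def kernel_score_test(K, y, n_perm=999, seed=0):
        """Score-type statistic Q = r' K r for a continuous outcome y,
        with a permutation p-value (intercept-only null model)."""
        rng = np.random.default_rng(seed)
        r = y - y.mean()
        q_obs = r @ K @ r
        q_perm = np.array([(rp := rng.permutation(r)) @ K @ rp for _ in range(n_perm)])
        return q_obs, (1 + np.sum(q_perm >= q_obs)) / (n_perm + 1)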
Community Air Sensor Network (CAIRSENSE) project ...
Advances in air pollution sensor technology have enabled the development of small and low cost systems to measure outdoor air pollution. The deployment of a large number of sensors across a small geographic area would have potential benefits to supplement traditional monitoring networks with additional geographic and temporal measurement resolution, if the data quality were sufficient. To understand the capability of emerging air sensor technology, the Community Air Sensor Network (CAIRSENSE) project deployed low cost, continuous and commercially-available air pollution sensors at a regulatory air monitoring site and as a local sensor network over a surrounding ~2 km area in Southeastern U.S. Co-location of sensors measuring oxides of nitrogen, ozone, carbon monoxide, sulfur dioxide, and particles revealed highly variable performance, both in terms of comparison to a reference monitor as well as whether multiple identical sensors reproduced the same signal. Multiple ozone, nitrogen dioxide, and carbon monoxide sensors revealed low to very high correlation with a reference monitor, with Pearson sample correlation coefficient (r) ranging from 0.39 to 0.97, -0.25 to 0.76, -0.40 to 0.82, respectively. The only sulfur dioxide sensor tested revealed no correlation (r 0.5), step-wise multiple linear regression was performed to determine if ambient temperature, relative humidity (RH), or age of the sensor in sampling days could be used in a correction algorithm to improve ...
García-Sanz, Ramón; Corchete, Luis Antonio; Alcoceba, Miguel; Chillon, María Carmen; Jiménez, Cristina; Prieto, Isabel; García-Álvarez, María; Puig, Noemi; Rapado, Immaculada; Barrio, Santiago; Oriol, Albert; Blanchard, María Jesús; de la Rubia, Javier; Martínez, Rafael; Lahuerta, Juan José; González Díaz, Marcos; Mateos, María Victoria; San Miguel, Jesús Fernando; Martínez-López, Joaquín; Sarasquete, María Eugenia
2017-12-01
Bortezomib- and thalidomide-based therapies have significantly contributed to improved survival of multiple myeloma (MM) patients. However, treatment-induced peripheral neuropathy (TiPN) is a common adverse event associated with them. Risk factors for TiPN in MM patients include advanced age, prior neuropathy, and other drugs, but there are conflicting results about the role of genetics in predicting the risk of TiPN. Thus, we carried out a genome-wide association study based on more than 300 000 exome single nucleotide polymorphisms in 172 MM patients receiving therapy involving bortezomib and thalidomide. We compared patients developing and not developing TiPN under similar treatment conditions (GEM05MAS65, NCT00443235). The highest-ranking single nucleotide polymorphism was rs45443101, located in the PLCG2 gene, but no significant differences were found after multiple comparison correction (adjusted P = .1708). Prediction analyses, cytoband enrichment, and pathway analyses were also performed, but none yielded any significant findings. A copy number approach was also explored, but this gave no significant results either. In summary, our study did not find a consistent genetic component associated with TiPN under bortezomib and thalidomide therapies that could be used for prediction, which makes clinical judgment essential in the practical management of MM treatment. Copyright © 2016 John Wiley & Sons, Ltd.
Limitations of quantitative analysis of deep crustal seismic reflection data: Examples from GLIMPCE
Lee, Myung W.; Hutchinson, Deborah R.
1992-01-01
Amplitude preservation in seismic reflection data can be obtained by a relative true amplitude (RTA) processing technique in which the relative strength of reflection amplitudes is preserved vertically as well as horizontally, after compensating for amplitude distortion by near-surface effects and propagation effects. Quantitative analysis of relative true amplitudes of the Great Lakes International Multidisciplinary Program on Crustal Evolution seismic data is hampered by large uncertainties in estimates of the water bottom reflection coefficient and the vertical amplitude correction and by inadequate noise suppression. Processing techniques such as deconvolution, F-K filtering, and migration significantly change the overall shape of amplitude curves and hence calculation of reflection coefficients and average reflectance. Thus lithological interpretation of deep crustal seismic data based on the absolute value of estimated reflection strength alone is meaningless. The relative strength of individual events, however, is preserved on curves generated at different stages in the processing. We suggest that qualitative comparisons of relative strength, if used carefully, provide a meaningful measure of variations in reflectivity. Simple theoretical models indicate that peg-leg multiples rather than water bottom multiples are the most severe source of noise contamination. These multiples are extremely difficult to remove when the water bottom reflection coefficient is large (>0.6), a condition that exists beneath parts of Lake Superior and most of Lake Huron.
Response inhibition in motor conversion disorder.
Voon, Valerie; Ekanayake, Vindhya; Wiggs, Edythe; Kranick, Sarah; Ameli, Rezvan; Harrison, Neil A; Hallett, Mark
2013-05-01
Conversion disorders (CDs) are unexplained neurological symptoms presumed to be related to a psychological issue. Studies focusing on conversion paralysis have suggested potential impairments in motor initiation or execution. Here we studied CD patients with aberrant or excessive motor movements and focused on motor response inhibition. We also assessed cognitive measures in multiple domains. We compared 30 CD patients and 30 age-, sex-, and education-matched healthy volunteers on a motor response inhibition task (go/no go), along with verbal motor response inhibition (color-word interference) and measures of attention, sustained attention, processing speed, language, memory, visuospatial processing, and executive function including planning and verbal fluency. CD patients had greater impairments in commission errors on the go/no go task (P < .001) compared with healthy volunteers, which remained significant after Bonferroni correction for multiple comparisons and after controlling for attention, sustained attention, depression, and anxiety. There were no significant differences in other cognitive measures. We highlight a specific deficit in motor response inhibition that may play a role in impaired inhibition of unwanted movement such as the excessive and aberrant movements seen in motor conversion. Patients with nonepileptic seizures, a different form of conversion disorder, are commonly reported to have lower IQ and multiple cognitive deficits. Our results point toward potential differences between conversion disorder subgroups. © 2013 Movement Disorder Society.
Heme Oxygenase-1 and 2 Common Genetic Variants and Risk for Multiple Sclerosis
Agúndez, José A. G.; García-Martín, Elena; Martínez, Carmen; Benito-León, Julián; Millán-Pascual, Jorge; Díaz-Sánchez, María; Calleja, Patricia; Pisa, Diana; Turpín-Fenoll, Laura; Alonso-Navarro, Hortensia; Pastor, Pau; Ortega-Cubero, Sara; Ayuso-Peralta, Lucía; Torrecillas, Dolores; García-Albea, Esteban; Plaza-Nieto, José Francisco; Jiménez-Jiménez, Félix Javier
2016-01-01
Several neurochemical, neuropathological, and experimental data suggest a possible role of oxidative stress in the etiopathogenesis of multiple sclerosis (MS). Heme oxygenases (HMOX) are an important defensive mechanism against oxidative stress, and HMOX1 is overexpressed in the brain and spinal cord of MS patients and in experimental autoimmune encephalomyelitis (EAE). We analyzed whether common polymorphisms affecting the HMOX1 and HMOX2 genes are related to the risk of developing MS. We analyzed the distribution of genotypes and allelic frequencies of the HMOX1 rs2071746, HMOX1 rs2071747, HMOX2 rs2270363, and HMOX2 rs1051308 SNPs, as well as the presence of copy number variations (CNVs) of these genes, in 292 subjects with MS and 533 healthy controls, using TaqMan assays. The frequencies of the HMOX2 rs1051308AA genotype and the HMOX2 rs1051308A and HMOX1 rs2071746A alleles were higher in MS patients than in controls, although only that of the SNP HMOX2 rs1051308 in men remained significant after correction for multiple comparisons. None of the studied polymorphisms was related to the age at disease onset or to the MS phenotype. The present study suggests a weak association between the HMOX2 rs1051308 polymorphism and the risk of developing MS in Spanish Caucasian men and a trend towards association between the HMOX1 rs2071746A allele and MS risk. PMID:26868429
Inflammatory cytokines in major depressive disorder: A case-control study.
Cassano, Paolo; Bui, Eric; Rogers, Andrew H; Walton, Zandra E; Ross, Rachel; Zeng, Mary; Nadal-Vicens, Mireya; Mischoulon, David; Baker, Amanda W; Keshaviah, Aparna; Worthington, John; Hoge, Elizabeth A; Alpert, Jonathan; Fava, Maurizio; Wong, Kwok K; Simon, Naomi M
2017-01-01
There is mixed evidence in the literature on the role of inflammation in major depressive disorder. Contradictory findings are attributed to lack of rigorous characterization of study subjects, to the presence of concomitant medical illnesses, to the small sample sizes, and to the limited number of cytokines tested. Subjects aged 18-70 years, diagnosed with major depressive disorder and presenting with chronic course of illness, as well as matched controls (n = 236), were evaluated by trained raters and provided blood for cytokine measurements. Cytokine levels in EDTA plasma were measured with the MILLIPLEX Multi-Analyte Profiling Human Cytokine/Chemokine Assay employing Luminex technology. The Wilcoxon rank-sum test was used to compare cytokine levels between major depressive disorder subjects and healthy volunteers, before (interleukin [IL]-1β, IL-6, and tumor necrosis factor-α) and after Bonferroni correction for multiple comparisons (IL-1α, IL-2, IL-3, IL-4, IL-5, IL-7, IL-8, IL-10, IL-12(p40), IL-12(p70), IL-13, IL-15, IFN-γ-inducible protein 10, Eotaxin, interferon-γ, monocyte chemoattractant protein-1, macrophage inflammatory protein-1α, granulocyte-macrophage colony-stimulating factor and vascular endothelial growth factor). There were no significant differences in cytokine levels between major depressive disorder subjects and controls, both prior to and after correction for multiple analyses (significance set at p ⩽ 0.05 and p ⩽ 0.002, respectively). Our well-characterized examination of cytokine plasma levels did not support the association of major depressive disorder with systemic inflammation. The heterogeneity of major depressive disorder, as well as a potential sampling bias selecting for non-inflammatory depression, might explain why our findings are discordant with the literature.
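A minimal sketch of this type of analysis with hypothetical data: a rank-sum test per cytokine and a Bonferroni-adjusted significance threshold (the study's own panel and cut-offs are given in the abstract above).

    import numpy as np
    from scipy.stats import mannwhitneyu   # equivalent to the Wilcoxon rank-sum test

    def compare_cytokines(mdd, ctrl, alpha=0.05):
        """mdd, ctrl: dicts mapping cytokine name -> array of plasma levels."""
        m = len(mdd)                       # number of comparisons for Bonferroni
        results = {}
        for name in mdd:
            _, p = mannwhitneyu(mdd[name], ctrl[name], alternative="two-sided")
            results[name] = (p, p <= alpha / m)   # raw p, significant after correction?
        return results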
Pallmann, Philip; Schaarschmidt, Frank; Hothorn, Ludwig A; Fischer, Christiane; Nacke, Heiko; Priesnitz, Kai U; Schork, Nicholas J
2012-11-01
Comparing diversities between groups is a task biologists are frequently faced with, for example in ecological field trials or when dealing with metagenomics data. However, researchers often waver about which measure of diversity to choose as there is a multitude of approaches available. As Jost (2008, Molecular Ecology, 17, 4015) has pointed out, widely used measures such as the Shannon or Simpson index have undesirable properties which make them hard to compare and interpret. Many of the problems associated with the use of these 'raw' indices can be corrected by transforming them into 'true' diversity measures. We introduce a technique that allows the comparison of two or more groups of observations and simultaneously tests a user-defined selection of a number of 'true' diversity measures. This procedure yields multiplicity-adjusted P-values according to the method of Westfall and Young (1993, Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment, 49, 941), which ensures that the rate of false positives (type I error) does not rise when the number of groups and/or diversity indices is extended. Software is available in the R package 'simboot'. © 2012 Blackwell Publishing Ltd.
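The 'raw index to true diversity' conversion the authors refer to corresponds to Hill numbers; a small sketch with hypothetical count data is given below (the multiplicity-adjusted testing itself is provided by the authors' R package 'simboot').

    import numpy as np

    def true_diversities(counts):
        """Hill numbers of order 0, 1, 2 from a vector of species counts:
        richness, exp(Shannon entropy) and inverse Simpson concentration."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        d0 = p.size                            # order 0: species richness
        d1 = np.exp(-np.sum(p * np.log(p)))    # order 1: 'true' Shannon diversity
        d2 = 1.0 / np.sum(p ** 2)              # order 2: inverse Simpson index
        return d0, d1, d2

    print(true_diversities([25, 10, 5, 5, 1]))   # hypothetical sample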
Guideline validation in multiple trauma care through business process modeling.
Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen
2003-07-01
Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed the structure with respect to formal errors. Several errors were detected in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process modeling tools, which check the content in comparison to a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure sustainability of guideline development, a representation independent of specific applications or specific providers is necessary. Clinical guidelines could then additionally be used for eLearning, process optimization and workflow management.
A Mis-recognized Medical Vocabulary Correction System for Speech-based Electronic Medical Record
Seo, Hwa Jeong; Kim, Ju Han; Sakabe, Nagamasa
2002-01-01
Speech recognition as an input tool for the electronic medical record (EMR) enables efficient data entry at the point of care. However, the recognition accuracy for medical vocabulary is much poorer than that for doctor-patient dialogue. We developed a mis-recognized medical vocabulary correction system based on syllable-by-syllable comparison of speech text against a medical vocabulary database. Using specialty medical vocabulary, the algorithm detects and corrects mis-recognized medical terms in narrative text. Our preliminary evaluation showed 94% accuracy in correcting mis-recognized medical vocabulary.
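A toy sketch of the matching idea, with English tokens standing in for the syllable-level comparison and a hypothetical three-term vocabulary: each recognized token is replaced by the closest database entry when the similarity exceeds a threshold.

    from difflib import SequenceMatcher

    MEDICAL_VOCAB = ["hypertension", "myocardial infarction", "pneumonia"]  # hypothetical

    def correct_token(token, vocab=MEDICAL_VOCAB, threshold=0.8):
        """Return the closest vocabulary entry if it is similar enough,
        otherwise return the token unchanged."""
        best, score = token, 0.0
        for term in vocab:
            s = SequenceMatcher(None, token.lower(), term.lower()).ratio()
            if s > score:
                best, score = term, s
        return best if score >= threshold else token

    print(correct_token("hypertenshun"))   # -> "hypertension"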
Corrective response times in a coordinated eye-head-arm countermanding task.
Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar
2018-06-01
Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.
NASA Astrophysics Data System (ADS)
Mehrotra, Rajeshwar; Sharma, Ashish
2012-12-01
The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for low- as well as higher-order moment biases in the GCM-derived variables across multiple selected timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, starting from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that RNBCs with three to five iterations are the most effective in removing distributional and persistence-related biases across the timescales considered.
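A deliberately simplified sketch of the nesting idea (mean and standard deviation only, no lag-1 autocorrelation correction, equal-length 'months'); iterating the two steps gives the recursive variant. All series and settings below are hypothetical.

    import numpy as np

    def nested_bias_correction(gcm_daily, obs_daily, days_per_month=30, n_iter=3):
        """Toy nested bias correction of a daily GCM series against observations;
        both series must be the same length and a whole number of months long."""
        x = np.asarray(gcm_daily, dtype=float)
        obs = np.asarray(obs_daily, dtype=float)
        for _ in range(n_iter):                              # recursive passes
            # daily-scale correction of mean and standard deviation
            x = (x - x.mean()) / x.std() * obs.std() + obs.mean()
            # aggregate to monthly means and correct those as well
            xm = x.reshape(-1, days_per_month).mean(axis=1)
            om = obs.reshape(-1, days_per_month).mean(axis=1)
            xm_corr = (xm - xm.mean()) / xm.std() * om.std() + om.mean()
            # redistribute the monthly adjustment back onto the daily values
            x = (x.reshape(-1, days_per_month) + (xm_corr - xm)[:, None]).ravel()
        return x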
Rajender, Singh; Carlus, Silas Justin; Bansal, Sandeep Kumar; Negi, Mahendra Pal Singh; Negi, Mahendra Pratap Singh; Sadasivam, Nirmala; Sadasivam, Muthusamy Narayanan; Thangaraj, Kumarasamy
2013-01-01
Polycystic ovarian syndrome (PCOS) refers to an inheritable androgen excess disorder characterized by multiple small follicles located at the ovarian periphery. Hyperandrogenism in PCOS, and the inverse correlation between androgen receptor (AR) CAG numbers and AR function, led us to hypothesize that CAG length variations may affect PCOS risk. The CAG repeat region of 169 patients recruited following strictly defined Rotterdam (2003) inclusion criteria and that of 175 ethnically similar control samples were analyzed. We also conducted a meta-analysis on the data taken from published studies, to generate a pooled estimate on 2194 cases and 2242 controls. CAG bi-allelic mean length was between 8.5 and 24.5 (mean = 17.43, SD = 2.43) repeats in the controls and between 11 and 24 (mean = 17.39, SD = 2.29) repeats in the cases, without any significant difference between the two groups. Further, comparison of the bi-allelic mean and its frequency distribution in three categories (short, moderate and long alleles) did not show any significant difference between controls and various case subgroups. Frequency distribution of the bi-allelic mean in two categories (extreme and moderate alleles) showed over-representation of extreme sized alleles in the cases with a marginally significant value (50.3% vs. 61.5%, χ(2) = 4.41; P = 0.036), which became non-significant upon applying Bonferroni correction for multiple comparisons. X-chromosome inactivation analysis showed no significant difference in the inactivation pattern of CAG alleles or in the comparison of the weighted bi-allelic mean between cases and controls. Meta-analysis also showed no significant correlation between CAG length and PCOS risk, except a minor over-representation of short CAG alleles in the cases. CAG bi-allelic mean length did not differ between controls and cases/case sub-groups, nor did the allele distribution. Over-representation of short/extreme-sized alleles in the cases may be a chance finding without any true association with PCOS risk.
Multiple Choice Items: How to Gain the Most out of Them.
ERIC Educational Resources Information Center
Talmir, Pinchas
1991-01-01
Describes how multiple-choice items can be designed and used as an effective diagnostic tool by avoiding their pitfalls and by taking advantage of their potential benefits. The following issues are discussed: 'correct' versus best answers; construction of diagnostic multiple-choice items; the problem of guessing; the use of justifications of…
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin
2009-07-01
This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology to this system. The image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in computer simulation. The SPGD algorithm is operated at seven zoom positions to calculate the optimized surface shape of the deformable mirror. After comparison of the performance of the corrected system with the baseline system, AO technology is shown to be an effective way of correcting the dynamic residual aberrations in conformal optical design.
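For reference, the stochastic parallel gradient descent (SPGD) update can be written generically as below; the sharpness metric, gain and perturbation amplitude are placeholders, and the actual optimization in the paper ran through MATLAB and Code V rather than Python.

    import numpy as np

    def spgd_optimize(metric, n_act, gain=0.5, delta=0.02, n_iter=500, seed=0):
        """Maximize an image-sharpness metric J(u) over actuator commands u
        using two-sided stochastic parallel gradient descent."""
        rng = np.random.default_rng(seed)
        u = np.zeros(n_act)
        for _ in range(n_iter):
            du = delta * rng.choice([-1.0, 1.0], size=n_act)   # random perturbation
            dj = metric(u + du) - metric(u - du)               # metric difference
            u += gain * dj * du                                # SPGD update rule
        return u

    target = np.array([0.3, -0.2, 0.1, 0.05])        # hypothetical optimal commands
    sharpness = lambda u: -np.sum((u - target) ** 2)
    u_best = spgd_optimize(sharpness, n_act=4)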
Comparisons of Reflectivities from the TRMM Precipitation Radar and Ground-Based Radars
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2008-01-01
Given the decade long and highly successful Tropical Rainfall Measuring Mission (TRMM), it is now possible to provide quantitative comparisons between ground-based radars (GRs) with the space-borne TRMM precipitation radar (PR) with greater certainty over longer time scales in various tropical climatological regions. This study develops an automated methodology to match and compare simultaneous TRMM PR and GR reflectivities at four primary TRMM Ground Validation (GV) sites: Houston, Texas (HSTN); Melbourne, Florida (MELB); Kwajalein, Republic of the Marshall Islands (KWAJ); and Darwin, Australia (DARW). Data from each instrument are resampled into a three-dimensional Cartesian coordinate system. The horizontal displacement during the PR data resampling is corrected. Comparisons suggest that the PR suffers significant attenuation at lower levels especially in convective rain. The attenuation correction performs quite well for convective rain but appears to slightly over-correct in stratiform rain. The PR and GR observations at HSTN, MELB and KWAJ agree to about 1 dB on average with a few exceptions, while the GR at DARW requires +1 to -5 dB calibration corrections. One of the important findings of this study is that the GR calibration offset is dependent on the reflectivity magnitude. Hence, we propose that the calibration should be carried out using a regression correction, rather than simply adding an offset value to all GR reflectivities. This methodology is developed towards TRMM GV efforts to improve the accuracy of tropical rain estimates, and can also be applied to the proposed Global Precipitation Measurement and other related activities over the globe.
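The proposed regression correction (as opposed to a constant offset) amounts to fitting matched GR against PR reflectivities and applying the fitted line; a minimal sketch with hypothetical matched samples is shown below.

    import numpy as np

    def regression_calibration(gr_dbz, pr_dbz):
        """Fit GR reflectivity against PR reflectivity (both in dBZ) and return
        a function that maps raw GR values to calibrated ones."""
        slope, intercept = np.polyfit(gr_dbz, pr_dbz, deg=1)
        return lambda z: slope * np.asarray(z) + intercept

    gr = np.array([18.0, 25.0, 32.0, 40.0])   # hypothetical matched samples
    pr = np.array([19.5, 25.5, 31.0, 37.5])
    calibrate = regression_calibration(gr, pr)
    print(calibrate(gr))                      # calibrated GR reflectivities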
NASA Astrophysics Data System (ADS)
Toscano, Marguerite A.
2016-06-01
Sample elevations corrected for tectonic uplift and assessed relative to local modeled sea levels provide a new perspective on paleoenvironmental history at Cobbler's Reef, Barbados. Previously, 14C-dated surface samples of fragmented Acropora palmata plotted above paleo sea level based on their present (uplifted) elevations, suggesting supratidal rubble deposited during a period of extreme storms (4500-3000 cal BP), precipitating reef demise. At several sites, however, A. palmata persisted, existing until ~370 cal BP. Uplift-corrected A. palmata sample elevations lie below the western Atlantic sea-level curve, and ~2 m below ICE-6G-modeled paleo sea level, under slow rates of sea-level rise, negating the possibility that Cobbler's Reef is a supratidal storm ridge. Most sites show limited age ranges from corals likely damaged/killed on the reef crest, not the mixed ages of rubble ridges, strongly suggesting the reef framework died off in stages over 6500 yr. Reef crest death assemblages invoke multiple paleohistoric causes, from ubiquitous hurricanes to anthropogenic impacts. Comparison of death assemblage ages to dated regional paleotempestological sequences, proxy-based paleotemperatures, recorded hurricanes, tsunamis, European settlement, deforestation, and resulting turbidity, reveals many possible factors inimical to the survival of A. palmata along Cobbler's Reef.
Harmonization of multi-site diffusion tensor imaging data.
Fortin, Jean-Philippe; Parker, Drew; Tunç, Birkan; Watanabe, Takanori; Elliott, Mark A; Ruparel, Kosha; Roalf, David R; Satterthwaite, Theodore D; Gur, Ruben C; Gur, Raquel E; Schultz, Robert T; Verma, Ragini; Shinohara, Russell T
2017-11-01
Diffusion tensor imaging (DTI) is a well-established magnetic resonance imaging (MRI) technique used for studying microstructural changes in the white matter. As with many other imaging modalities, DTI images suffer from technical between-scanner variation that hinders comparisons of images across imaging sites, scanners and over time. Using fractional anisotropy (FA) and mean diffusivity (MD) maps of 205 healthy participants acquired on two different scanners, we show that the DTI measurements are highly site-specific, highlighting the need of correcting for site effects before performing downstream statistical analyses. We first show evidence that combining DTI data from multiple sites, without harmonization, may be counter-productive and negatively impacts the inference. Then, we propose and compare several harmonization approaches for DTI data, and show that ComBat, a popular batch-effect correction tool used in genomics, performs best at modeling and removing the unwanted inter-site variability in FA and MD maps. Using age as a biological phenotype of interest, we show that ComBat both preserves biological variability and removes the unwanted variation introduced by site. Finally, we assess the different harmonization methods in the presence of different levels of confounding between site and age, in addition to test robustness to small sample size studies. Copyright © 2017 Elsevier Inc. All rights reserved.
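A deliberately simplified location/scale harmonization in the spirit of ComBat, without the empirical Bayes shrinkage or the covariate preservation described in the paper; it is shown only to make the idea concrete, and an established ComBat implementation should be used in practice.

    import numpy as np

    def simple_harmonize(values, sites):
        """Remove additive and multiplicative site effects from one imaging feature
        (e.g. mean FA in a tract); values and sites are per-subject arrays."""
        v = np.asarray(values, dtype=float)
        s = np.asarray(sites)
        out = np.empty_like(v)
        grand_mean, grand_std = v.mean(), v.std()
        for site in np.unique(s):
            idx = s == site
            out[idx] = (v[idx] - v[idx].mean()) / v[idx].std() * grand_std + grand_mean
        return out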
Verification of road databases using multiple road models
NASA Astrophysics Data System (ADS)
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that based on the proposed method a highly reliable semi-automatic approach for road database verification can be designed.
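A minimal sketch of the Dempster-Shafer fusion step for two modules, each assigning mass to correct, incorrect and unknown (mass on the whole frame); the numerical masses are placeholders.

    def combine_ds(m1, m2):
        """Dempster's rule of combination over {correct, incorrect, unknown}."""
        c = (m1["correct"] * m2["correct"] + m1["correct"] * m2["unknown"]
             + m1["unknown"] * m2["correct"])
        i = (m1["incorrect"] * m2["incorrect"] + m1["incorrect"] * m2["unknown"]
             + m1["unknown"] * m2["incorrect"])
        u = m1["unknown"] * m2["unknown"]
        k = m1["correct"] * m2["incorrect"] + m1["incorrect"] * m2["correct"]  # conflict
        norm = 1.0 - k
        return {"correct": c / norm, "incorrect": i / norm, "unknown": u / norm}

    road_model = {"correct": 0.6, "incorrect": 0.1, "unknown": 0.3}      # placeholder
    context_model = {"correct": 0.5, "incorrect": 0.2, "unknown": 0.3}   # placeholder
    print(combine_ds(road_model, context_model))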
[Small infundibulectomy versus ventriculotomy in tetralogy of Fallot].
Bojórquez-Ramos, Julio César
2013-01-01
The surgical correction of tetralogy of Fallot (TOF) is standardized with respect to closure of the septal defect, but differs in the way the right ventricular outflow tract (RVOT) is enlarged. The aim was to compare the early postoperative clinical course after enlargement of the RVOT obstruction by the classical ventriculotomy technique and by small infundibulectomy (SI). We analyzed the database of the pediatric heart surgery service from 2008 to 2011. Patients with non-complex TOF undergoing complete correction by classical ventriculotomy or SI were selected. ANOVA, χ² and Fisher's exact tests were applied. The data included 47 patients, 55% (26) male, mean age 43 months (6-172); classical ventriculotomy was performed in 61.7% (29). This group had higher peak lactate levels (9.07 versus 6.8 mmol/L, p = 0.049) and greater bleeding per kg in the first 12 hours (39.1 versus 20.3 mL/kg, p = 0.016). Death occurred in 9 cases (31.03%) versus one (5.6%) in the SI group (p = 0.037); complications exclusive to this group, such as acute renal failure, hemopneumothorax, pneumonia, permanent AV block and multiple organ failure, were observed. Morbidity and mortality were higher in the classical ventriculotomy group in comparison with SI, possibly associated with the greater blood loss.
Brain correlates of the intrinsic subjective cost of effort in sedentary volunteers.
Bernacer, J; Martinez-Valbuena, I; Martinez, M; Pujol, N; Luis, E; Ramirez-Castillo, D; Pastor, M A
2016-01-01
One key aspect of motivation is the ability of agents to overcome excessive weighting of intrinsic subjective costs. This contribution aims to analyze the subjective cost of effort and assess its neural correlates in sedentary volunteers. We recruited a sample of 57 subjects who underwent a decision-making task using a prospective, moderate, and sustained physical effort as devaluating factor. Effort discounting followed a hyperbolic function, and individual discounting constants correlated with an indicator of sedentary lifestyle (global physical activity questionnaire; R=-0.302, P=0.033). A subsample of 24 sedentary volunteers received a functional magnetic resonance imaging scan while performing a similar effort-discounting task. BOLD signal of a cluster located in the dorsomedial prefrontal cortex correlated with the subjective value of the pair of options under consideration (Z>2.3, P<0.05; cluster corrected for multiple comparisons for the whole brain). Furthermore, effort-related discounting of reward correlated with the signal of a cluster in the ventrolateral prefrontal cortex (Z>2.3, P<0.05; small volume cluster corrected for a region of interest including the ventral prefrontal cortex and striatum). This study offers empirical data about the intrinsic subjective cost of effort and its neural correlates in sedentary individuals. © 2016 Elsevier B.V. All rights reserved.
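The hyperbolic discounting referred to above can be written SV = R / (1 + k·E); a sketch fitting the individual discounting constant k from hypothetical choice-derived subjective values follows.

    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic_sv(effort, k, reward=10.0):
        """Subjective value of a fixed reward devalued hyperbolically by effort."""
        return reward / (1.0 + k * effort)

    effort = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # hypothetical effort levels
    sv = np.array([10.0, 7.8, 6.1, 4.3, 2.6])         # hypothetical subjective values
    (k_hat,), _ = curve_fit(lambda e, k: hyperbolic_sv(e, k), effort, sv, p0=[0.1])
    print(f"estimated discounting constant k = {k_hat:.3f}")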
Lewis, Ashley Glen; Schriefers, Herbert; Bastiaansen, Marcel; Schoffelen, Jan-Mathijs
2018-05-21
Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.
Research knowledge in undergraduate school in Brazil: a comparison between medical and law students.
Reis Filho, Antonio José Souza; Andrade, Bruno Bezerril; Mendonça, Vitor Rosa Ramos de; Barral-Netto, Manoel
2010-09-01
Exposure to science education during college may affect a student's profile, and research experience may be associated with better professional performance. We hypothesized that the impact of research experience obtained during undergraduate study differs among professional curricula and among undergraduate programs. A validated multiple-choice questionnaire concerning scientific concepts was given to students in the first and fourth years of medical and law school at a public Brazilian educational institution. Medical students participated more frequently in introductory scientific programs than law students, and this trend increased from the first to the fourth years of study. In both curricula, fourth-year students displayed a higher percentage of correct answers than first-year students. A higher proportion of fourth-year students correctly defined the concepts of scientific hypothesis and scientific theory. In the areas of interpretation and writing of scientific papers, fourth-year students, in both curricula, felt more confident than first-year students. Although medical students felt less confident in planning and conducting research projects than law students, they were more involved in research activities. Undergraduate medical education seems to favor the development of critical scientific maturity more than law education does. Specific policy in medical schools is a reasonable explanation for medical students' participation in more scientific activities.
NASA Astrophysics Data System (ADS)
Mantry, Sonny; Petriello, Frank
2010-05-01
We derive a factorization theorem for the Higgs boson transverse momentum (pT) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for mh≫pT≫ΛQCD, where mh denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the pT scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the pT-scale physics simplifies the implementation of higher order radiative corrections in αs(pT). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in pT/mh and ΛQCD/pT can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-pT resummation.
Volkán-Kacsó, Sándor; Marcus, Rudolph A
2016-10-25
A recently proposed chemomechanical group transfer theory of rotary biomolecular motors is applied to treat single-molecule controlled rotation experiments. In these experiments, single-molecule fluorescence is used to measure the binding and release rate constants of nucleotides by monitoring the occupancy of binding sites. It is shown how missed events of nucleotide binding and release in these experiments can be corrected using theory, with F1-ATP synthase as an example. The missed events are significant when the reverse rate is very fast. Using the theory the actual rate constants in the controlled rotation experiments and the corrections are predicted from independent data, including other single-molecule rotation and ensemble biochemical experiments. The effective torsional elastic constant is found to depend on the binding/releasing nucleotide, and it is smaller for ADP than for ATP. There is a good agreement, with no adjustable parameters, between the theoretical and experimental results of controlled rotation experiments and stalling experiments, for the range of angles where the data overlap. This agreement is perhaps all the more surprising because it occurs even though the binding and release of fluorescent nucleotides is monitored at single-site occupancy concentrations, whereas the stalling and free rotation experiments have multiple-site occupancy.
Cloud and Aerosol Measurements from the GLAS Polar Orbiting Lidar: First Year Results
NASA Technical Reports Server (NTRS)
Spinhirne, J. D.; Palm, S. P.; Hlavka, D. L.; Hart, W. D.; Mahesh, A.; Welton, E. J.
2004-01-01
The Geoscience Laser Altimeter System (GLAS) launched in 2003 is the first polar orbiting satellite lidar. The instrument was designed for high performance observations of the distribution and optical scattering cross sections of clouds and aerosol. GLAS is approaching six months of on orbit data operation. These data from thousands of orbits illustrate the ability of space lidar to accurately and dramatically measure the height distribution of global cloud and aerosol to an unprecedented degree. There were many intended science applications of the GLAS data and significant results have already been realized. One application is the accurate height distribution and coverage of global cloud cover with one goal of defining the limitation and inaccuracies of passive retrievals. Comparison to MODIS cloud retrievals shows notable discrepancies. Initial comparisons to NOAA 14&15 satellite cloud retrievals show basic similarity in overall cloud coverage, but important differences in height distribution. Because of the especially poor performance of passive cloud retrievals in polar regions, and partly because of high orbit track densities, the GLAS measurements are by far the most accurate measurement of Arctic and Antarctica cloud cover from space to date. Global aerosol height profiling is a fundamentally new measurement from space with multiple applications. A most important aerosol application is providing input to global aerosol generation and transport models. Another is improved measurement of aerosol optical depth. Oceanic surface energy flux derivation from PBL and LCL height measurements is another application of GLAS data that is being pursued. A special area of work for GLAS data is the correction and application of multiple scattering effects. Stretching of surface return pulses in excess of 40 m from cloud propagation effects and other interesting multiple scattering phenomena have been observed. As an EOS project instrument, GLAS data products are openly available to the science community. First year results from GLAS are summarized.
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
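As an illustration of the ±2% Van Dyk-style depth-dose comparison quoted above (the phantom plans themselves were scored with a gamma criterion, which is more involved), a simple percent-difference pass rate against the measured maximum can be computed as below with hypothetical curves.

    import numpy as np

    def van_dyk_pass_rate(dose_calc, dose_meas, tolerance=0.02):
        """Fraction of points where calculated and measured depth dose agree
        within a percentage of the measured maximum."""
        calc = np.asarray(dose_calc, dtype=float)
        meas = np.asarray(dose_meas, dtype=float)
        diff = np.abs(calc - meas) / meas.max()
        return np.mean(diff <= tolerance)

    depth = np.linspace(0.0, 20.0, 50)              # cm, hypothetical
    meas = 100.0 * np.exp(-0.06 * depth)
    calc = meas * (1.0 + 0.01 * np.sin(depth))      # hypothetical model error
    print(van_dyk_pass_rate(calc, meas))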
Performance analysis of multiple PRF technique for ambiguity resolution
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
For short wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar in different PRF's in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of PRF's. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
Is orthodontics prior to 11 years of age evidence-based? A systematic review and meta-analysis.
Sunnak, R; Johal, A; Fleming, P S
2015-05-01
To determine whether interceptive orthodontics prior to the age of 11 years is more effective than later treatment in the short- and long-term. Multiple electronic databases were searched, authors were contacted as required and reference lists of included studies were screened. Randomised and quasi-randomised controlled trials were included, comparing children under the age of 11 years requiring interceptive orthodontic correction for a range of occlusal problems, to an untreated or positive control group. Data extraction and quality assessment were performed independently and in duplicate. Twenty-two studies were potentially eligible for meta-analysis, the majority related to growth modification. Other outcomes considered included correction of unilateral posterior crossbite, anterior openbite, extractions and ectopic maxillary canines. Meta-analysis was possible for 11 comparisons. For Class II correction in the short-term, meta-analyses demonstrated a statistically significant reduction in ANB (-1.4 degrees, 95% CI: -2.17, -0.64) and overjet (-5.81 mm, 95% CI: -6.37, -5.25) with both functional appliances and headgear versus control. In the long-term, however, statistical significance was not found for the same outcomes. Treatment duration was prolonged with both functional appliances (6.85 months, 95% CI: 3.24, 10.45) and headgear (12.47 months, 95% CI: 8.67, 16.26) compared to adolescent treatments. Meta-analyses were not possible for comparisons of other interceptive treatments due to heterogeneity and methodological limitations. The results suggest a lack of evidence to prove that early treatment carries additional benefit over and above that achieved with treatment commencing later; however, this does not necessarily imply that early treatment is ineffective. Further high quality trials are required to assess the effectiveness of early treatment compared to later intervention. Interceptive orthodontics is variously recommended for a range of malocclusions both of skeletal and dental aetiology. The merits of interceptive treatment, however, are often disputed. Further high quality trials are required to assess the effectiveness of early treatment compared to later intervention. Copyright © 2015 Elsevier Ltd. All rights reserved.
Schönbichler, S A; Bittner, L K H; Weiss, A K H; Griesser, U J; Pallua, J D; Huck, C W
2013-08-01
The aim of this study was to evaluate the ability of near-infrared chemical imaging (NIR-CI), near-infrared (NIR), Raman and attenuated-total-reflectance infrared (ATR-IR) spectroscopy to quantify three polymorphic forms (I, II, III) of furosemide in ternary powder mixtures. For this purpose, partial least-squares (PLS) regression models were developed, and different data preprocessing algorithms such as normalization, standard normal variate (SNV), multiplicative scatter correction (MSC) and 1st to 3rd derivatives were applied to reduce the influence of systematic disturbances. The performance of the methods was evaluated by comparison of the standard error of cross-validation (SECV), R², and the ratio performance deviation (RPD). Limits of detection (LOD) and limits of quantification (LOQ) of all methods were determined. For NIR-CI, a SECV_corr-spec and a SECV_single-pixel-corrected were calculated to assess the loss of accuracy by taking advantage of the spatial information. NIR-CI showed a SECV_corr-spec (SECV_single-pixel-corrected) of 2.82% (3.71%), 3.49% (4.65%), and 4.10% (5.06%) for forms I, II, and III. NIR had a SECV of 2.98%, 3.62%, and 2.75%, and Raman reached 3.25%, 3.08%, and 3.18%. The SECVs of the ATR-IR models were 7.46%, 7.18%, and 12.08%. This study proves that NIR-CI, NIR, and Raman are well suited to quantify forms I-III of furosemide in ternary mixtures. Because of the pressure-dependent conversion of form II to form I, ATR-IR was found to be less appropriate for an accurate quantification of the mixtures. In this study, the capability of NIR-CI for the quantification of polymorphic ternary mixtures was compared with conventional spectroscopic techniques for the first time. For this purpose, a new way of spectra selection was chosen, and two kinds of SECVs were calculated to achieve a better comparability of NIR-CI to NIR, Raman, and ATR-IR. Copyright © 2013 Elsevier B.V. All rights reserved.
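A compact sketch of one preprocessing/regression combination of the kind evaluated (SNV followed by PLS with cross-validation, SECV and RPD); the spectra and polymorph fractions below are random placeholders.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum individually."""
        x = np.asarray(spectra, dtype=float)
        return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    X = snv(rng.random((40, 200)))       # 40 hypothetical spectra, 200 wavelengths
    y = rng.random(40)                   # hypothetical fraction of polymorph I

    y_cv = cross_val_predict(PLSRegression(n_components=5), X, y, cv=10).ravel()
    secv = np.sqrt(np.mean((y - y_cv) ** 2))   # standard error of cross-validation
    rpd = y.std() / secv                       # ratio performance deviation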
ABO, Secretor and Lewis histo-blood group systems influence the digestive form of Chagas disease.
Bernardo, Cássia Rubia; Camargo, Ana Vitória Silveira; Ronchi, Luís Sérgio; de Oliveira, Amanda Priscila; de Campos Júnior, Eumildo; Borim, Aldenis Albaneze; Brandão de Mattos, Cinara Cássia; Bestetti, Reinaldo Bulgarelli; de Mattos, Luiz Carlos
2016-11-01
Chagas disease, caused by Trypanosoma cruzi, can affect the heart, esophagus and colon. The reasons that some patients develop different clinical forms or remain asymptomatic are unclear. It is believed that tissue immunogenetic markers influence the tropism of T. cruzi for different organs. ABO, Secretor and Lewis histo-blood group systems express a variety of tissue carbohydrate antigens that influence the susceptibility or resistance to diseases. This study aimed to examine the association of ABO, Secretor and Lewis histo-blood systems with the clinical forms of Chagas disease. We enrolled 339 consecutive adult patients with chronic Chagas disease regardless of gender (cardiomyopathy: n=154; megaesophagus: n=119; megacolon: n=66). The control group was composed of 488 healthy blood donors. IgG anti-T. cruzi antibodies were detected by ELISA. ABO and Lewis phenotypes were defined by standard hemagglutination tests. Secretor (FUT2) and Lewis (FUT3) genotypes, determined by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), were used to infer the correct histo-blood group antigens expressed in the gastrointestinal tract. The proportions between groups were compared using the χ² test with Yates correction and Fisher's exact test, and the Odds Ratio (OR) and 95% Confidence Interval (95% CI) were calculated. An alpha error of 5% was considered significant, with p-values <0.05 being corrected for multiple comparisons (pc). No statistically significant differences were found for the ABO (χ²: 2.635; p-value=0.451), Secretor (χ²: 0.056; p-value=0.812) or Lewis (χ²: 2.092; p-value=0.351) histo-blood group phenotypes between patients and controls. However, B plus AB Secretor phenotypes were prevalent in pooled data from megaesophagus and megacolon patients (OR: 5.381; 95% CI: 1.230-23.529; p-value=0.011; pc=0.022) in comparison to A plus O Secretor phenotypes. The tissue antigen variability resulting from the combined action of the ABO and Secretor histo-blood systems is associated with the digestive forms of Chagas disease. Copyright © 2016 Elsevier B.V. All rights reserved.
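A sketch of the 2x2 analysis described (Yates-corrected chi-square plus an odds ratio with a Woolf-type 95% confidence interval); the counts are placeholders, not the study data.

    import numpy as np
    from scipy.stats import chi2_contingency

    def two_by_two(a, b, c, d):
        """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
        table = np.array([[a, b], [c, d]], dtype=float)
        chi2, p, _, _ = chi2_contingency(table, correction=True)  # Yates correction
        odds_ratio = (a * d) / (b * c)
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)                       # Woolf's method
        ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
        return chi2, p, odds_ratio, ci

    print(two_by_two(30, 50, 40, 120))   # hypothetical counts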
Land Surface Albedo from MERIS Reflectances Using MODIS Directional Factors
NASA Technical Reports Server (NTRS)
Schaaf, Crystal L. B.; Gao, Feng; Strahler, Alan H.
2004-01-01
MERIS Level 2 surface reflectance products are now available to the scientific community. This paper demonstrates the production of MERIS-derived surface albedo and Nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectances by coupling the MERIS data with MODIS BRDF products. Initial efforts rely on the specification of surface anisotropy as provided by the global MODIS BRDF product for a first guess of the shape of the BRDF and then make use of all of the coincidentally available, partially atmospherically corrected, cloud-cleared MERIS observations to generate MERIS-derived BRDF and surface albedo quantities for each location. Comparisons between MODIS (aerosol-corrected) and MERIS (not-yet aerosol-corrected) surface values from April and May 2003 are also presented for case studies in Spain and California as well as preliminary comparisons with field data from the Devil's Rock Surfrad/BSRN site.
Deriving Albedo from Coupled MERIS and MODIS Surface Products
NASA Technical Reports Server (NTRS)
Gao, Feng; Schaaf, Crystal; Jin, Yu-Fang; Lucht, Wolfgang; Strahler, Alan
2004-01-01
MERIS Level 2 surface reflectance products are now available to the scientific community. This paper demonstrates the production of MERIS-derived surface albedo and Nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectances by coupling the MERIS data with MODIS BRDF products. Initial efforts rely on the specification of surface anisotropy as provided by the global MODIS BRDF product for a first guess of the shape of the BRDF and then make use of all the coincidentally available, partially atmospherically corrected, cloud-cleared MERIS observations to generate MERIS-derived BRDF and surface albedo quantities for each location. Comparisons between MODIS (aerosol-corrected) and MERIS (not-yet aerosol-corrected) surface values from April and May 2003 are also presented for case studies in Spain and California as well as preliminary comparisons with field data from the Devil's Rock Surfrad/BSRN site.
NASA Astrophysics Data System (ADS)
Liu, J.; Lu, W. Q.
2010-03-01
This paper presents detailed MD simulations of the properties, including the thermal conductivities and viscosities, of quantum fluid helium at different state points. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach, and the properties are calculated using the Green-Kubo equations. A comparison is made among the numerical results using the LJ and QFH potentials and the existing database; it shows that the LJ model is not quantitatively correct for supercritical liquid helium, so the quantum effect must be taken into account when quantum fluid helium is studied. The comparison of the thermal conductivity is also made as a function of temperature and pressure, and the results show that the quantum correction is an effective way to obtain thermal conductivities.
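As a rough illustration of the quantum correction mentioned here, the quadratic Feynman-Hibbs (QFH) effective pair potential adds a temperature-dependent Laplacian term to the classical Lennard-Jones interaction. The sketch below encodes that standard form with helium-like parameters that are assumptions of this example, not values taken from the paper.

```python
# Hedged sketch of the quadratic Feynman-Hibbs (QFH) correction to a Lennard-Jones
# pair potential; the helium LJ parameters are typical literature values, assumed here.
import numpy as np

HBAR = 1.054571817e-34      # J s
KB   = 1.380649e-23         # J/K
M_HE = 6.6464731e-27        # kg, mass of a 4He atom

EPS   = 10.22 * KB          # J, assumed LJ well depth for He
SIGMA = 2.556e-10           # m, assumed LJ diameter for He

def lj(r):
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (sr6**2 - sr6)

def lj_d1(r):               # dU/dr
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (-12.0 * sr6**2 + 6.0 * sr6) / r

def lj_d2(r):               # d2U/dr2
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (156.0 * sr6**2 - 42.0 * sr6) / r**2

def qfh(r, T):
    """Quadratic Feynman-Hibbs effective potential at temperature T (pair of He atoms)."""
    mu = M_HE / 2.0                                  # reduced mass of the pair
    prefac = HBAR**2 / (24.0 * mu * KB * T)
    return lj(r) + prefac * (lj_d2(r) + 2.0 * lj_d1(r) / r)

r = np.linspace(2.2e-10, 8.0e-10, 5)
print((qfh(r, 10.0) - lj(r)) / KB)   # size of the quantum correction, in kelvin, at 10 K
```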
Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2013-07-01
We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.
AIDS: The Impact on the Criminal Justice System Management of Aids in Corrections.
1991-01-01
interferon and some antibiotics, cause weakness and depression, which only adds to the patient's feeling of dysphoria. In addition, some treatments ... science and ourselves will sharply redefine the future of AIDS treatment in correctional institutions. Sources of Data: The data utilized in this ... an effort to help define and evaluate current treatment rationale used in the correctional setting in comparison to that of "normal" society.
Kan, Zhong-Yuan; Walters, Benjamin T.; Mayne, Leland; Englander, S. Walter
2013-01-01
Hydrogen exchange technology provides a uniquely powerful instrument for measuring protein structural and biophysical properties, quantitatively and in a nonperturbing way, and determining how these properties are implemented to produce protein function. A developing hydrogen exchange–mass spectrometry method (HX MS) is able to analyze large biologically important protein systems while requiring only minuscule amounts of experimental material. The major remaining deficiency of the HX MS method is the inability to deconvolve HX results to individual amino acid residue resolution. To pursue this goal we used an iterative optimization program (HDsite) that integrates recent progress in multiple peptide acquisition together with previously unexamined isotopic envelope-shape information and a site-resolved back-exchange correction. To test this approach, residue-resolved HX rates computed from HX MS data were compared with extensive HX NMR measurements, and analogous comparisons were made in simulation trials. These tests found excellent agreement and revealed the important computational determinants. PMID:24019478
A comparison of genetic variants between proficient low- and high-risk sport participants.
Thomson, Cynthia J; Power, Rebecca J; Carlson, Scott R; Rupert, Jim L; Michel, Grégory
2015-01-01
Athletes participating in high-risk sports consistently report higher scores on sensation-seeking measures than do low-risk athletes or non-athletic controls. To determine whether genetic variants commonly associated with sensation seeking were over-represented in such athletes, proficient practitioners of high-risk (n = 141) and low-risk sports (n = 132) were compared for scores on sensation seeking and then genotyped at 33 polymorphic loci in 14 candidate genes. As expected, athletes participating in high-risk sports scored higher on sensation seeking than did low-risk sport athletes (P < .01). Genotypes were associated with high-risk sport participation for two genes (stathmin (P = .004) and brain-derived neurotrophic factor (P = .03)), and these associations held when demographically matched subsets of the sport cohorts were compared (P < .05); however, in all cases, the associations did not survive correction for multiple testing.
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel.
Grapov, Dmitry; Newman, John W
2012-09-01
Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data sets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations, including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010).
NASA Technical Reports Server (NTRS)
Baumgardner, M. F. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Multispectral scanner data obtained by ERTS-1 over six test sites in the Central United States were analyzed and interpreted. ERTS-1 data for some of the test sites were geometrically corrected and temporally overlaid. Computer-implemented pattern recognition techniques were used in the analysis of all multispectral data. These techniques were used to evaluate ERTS-1 data as a tool for soil survey. Geology maps and land use inventories were prepared by digital analysis of multispectral data. Identification and mapping of crop species and rangelands were achieved through the analysis of 1972 and 1973 ERTS-1 data. Multiple dates of ERTS-1 data were examined to determine the variation with time of the areal extent of surface water resources on the Southern Great Plains.
An empirical model for polarized and cross-polarized scattering from a vegetation layer
NASA Technical Reports Server (NTRS)
Liu, H. L.; Fung, A. K.
1988-01-01
An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated for by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and radar measurements indicate good agreement in polarization, angular trends, and ka up to 4, where k is the wavenumber and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.
A multivariate analysis of sex offender recidivism.
Scalora, Mario J; Garbin, Calvin
2003-06-01
Sex offender recidivism risk is a multifaceted phenomenon requiring consideration across multiple risk factor domains. The impact of treatment involvement on subsequent recidivism has been given limited attention in comparison to other forensic mental health issues. The present analysis is a retrospective study of sex offenders treated at a secure facility utilizing a cognitive-behavioral program, matched with an untreated correctional sample. Variables studied included demographic, criminal history, offense-related, and treatment progress measures. Recidivism was assessed through arrest data. Multivariate analysis suggests that recidivism is significantly related to quality of treatment involvement, offender demographics, offense characteristics, and criminal history. Successfully treated offenders were significantly less likely to subsequently reoffend. Recidivists were also significantly younger, less likely to be married, had engaged in more victim grooming or less violent offending behavior, and had significantly more prior property charges. The authors discuss the clinical and policy implications of the interrelationship between treatment involvement and recidivism.
Dissociation, childhood trauma, and ataque de nervios among Puerto Rican psychiatric outpatients.
Lewis-Fernández, Roberto; Garrido-Castillo, Pedro; Bennasar, Mari Carmen; Parrilla, Elsie M; Laria, Amaro J; Ma, Guoguang; Petkova, Eva
2002-09-01
This study examined the relationships of dissociation and childhood trauma with ataque de nervios. Forty Puerto Rican psychiatric outpatients were evaluated for frequency of ataque de nervios, dissociative symptoms, exposure to trauma, and mood and anxiety psychopathology. Blind conditions were maintained across assessments. Data for 29 female patients were analyzed. Among these 29 patients, clinician-rated dissociative symptoms increased with frequency of ataque de nervios. Dissociative Experiences Scale scores and diagnoses of panic disorder and dissociative disorders were also associated with ataque frequency, before corrections were made for multiple comparisons. The rate of childhood trauma was uniformly high among the patients and showed no relationship to dissociative symptoms and disorder or number of ataques. Frequent ataques de nervios may, in part, be a marker for psychiatric disorders characterized by dissociative symptoms. Childhood trauma per se did not account for ataque status in this group of female outpatients.
EEG and MEG data analysis in SPM8.
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open-source software package written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady-state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and to build custom analysis tools using powerful graphical user interface (GUI) and batching tools.
A comparison of experimental and calculated thin-shell leading-edge buckling due to thermal stresses
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1988-01-01
High-temperature thin-shell leading-edge buckling test data are analyzed using NASA structural analysis (NASTRAN) as a finite element tool for predicting thermal buckling characteristics. Buckling points are predicted for several combinations of edge boundary conditions. The problem of relating the appropriate plate area to the edge stress distribution and the stress gradient is addressed in terms of analysis assumptions. Local plasticity was found to occur on the specimen analyzed, and this tended to simplify the basic problem since it effectively equalized the stress gradient from loaded edge to loaded edge. The initial loading was found to be difficult to select for the buckling analysis because of the transient nature of thermal stress. Multiple initial model loadings are likely required for complicated thermal stress time histories before a pertinent finite element buckling analysis can be achieved. The basic mode shapes determined from experimentation were correctly identified from computation.
Total prompt γ-ray emission in fission
NASA Astrophysics Data System (ADS)
Wu, C. Y.; Chyzh, A.; Kwan, E.; Henderson, R. A.; Bredeweg, T. A.; Haight, R. C.; Hayes-Sterbenz, A. C.; Lee, H. Y.; O'Donnell, J. M.; Ullmann, J. L.
2017-09-01
The total prompt γ-ray energy distributions were measured for the neutron-induced fission of 235U and 239,241Pu at incident neutron energies of 0.025 eV-100 keV, and for the spontaneous fission of 252Cf, using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments by a parallel-plate avalanche counter. Corrections were made to the measured distribution by unfolding the two-dimensional spectrum of total prompt γ-ray energy vs multiplicity using a simulated DANCE response matrix. A summary of this work is presented with emphasis on the comparison of the total prompt fission γ-ray energy between our results and previous ones. The mean values of the total prompt γ-ray energy ⟨Eγ,tot⟩, determined from the unfolded distributions, are ~20% higher than those derived from measurements using a single γ-ray detector for all the fissile nuclei studied.
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly results in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
Plante, David T.; Landsness, Eric C.; Peterson, Michael J.; Goldstein, Michael R.; Wanger, Tim; Guokas, Jeff J.; Tononi, Giulio; Benca, Ruth M.
2012-01-01
Hypersomnolence in major depressive disorder (MDD) plays an important role in the natural history of the disorder, but the basis of hypersomnia in MDD is poorly understood. Slow wave activity (SWA) has been associated with sleep homeostasis, as well as sleep restoration and maintenance, and may be altered in MDD. Therefore, we conducted a post-hoc study that utilized high-density electroencephalography (hdEEG) to test the hypothesis that MDD subjects with hypersomnia (HYS+) would have decreased SWA relative to age- and sex-matched MDD subjects without hypersomnia (HYS−) and healthy controls (n=7 for each group). After correcting for multiple comparisons using statistical non-parametric mapping, HYS+ subjects demonstrated significantly reduced parieto-occipital all-night SWA relative to HYS− subjects. Our results suggest hypersomnolence may be associated with topographic reductions in SWA in MDD. Further research using an adequately powered prospective design is needed to confirm these findings. PMID:22512951
EEG and MEG Data Analysis in SPM8
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open-source software package written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady-state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and to build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221
Laurin, Nancy; Milot, Emmanuel
2014-03-01
Allele frequencies and forensically relevant population statistics were estimated for the short tandem repeat (STR) loci of the AmpFℓSTR® Identifiler® Plus and PowerPlex® 16 HS amplification kits, including D2S1338, D19S433, Penta D, and Penta E, for three First Nations Aboriginal populations and for Caucasians in Canada. The cumulative power of discrimination was ≥ 0.999999999999984 and the cumulative power of exclusion was ≥ 0.999929363 for both amplification systems in all populations. No significant departure from Hardy-Weinberg equilibrium was detected for D2S1338, D19S433, Penta D, and Penta E or the 13 Combined DNA Index System core STR loci after correction for multiple testing. Significant genetic diversity was observed between these four populations. Comparison with published frequency data for other populations is also presented.
Lucht, Michael J; Barnow, Sven; Sonnenfeld, Christine; Ulrich, Ines; Grabe, Hans Joergen; Schroeder, Winnie; Völzke, Henry; Freyberger, Harald J; John, Ulrich; Herrmann, Falko H; Kroemer, Heyo; Rosskopf, Dieter
2013-02-01
The application of intranasal oxytocin enhances facial emotion recognition in normal subjects and in subjects with autism spectrum disorders (ASD). In addition, various features of social cognition have been associated with variants of the oxytocin receptor gene (OXTR). Therefore, we tested for associations between mind-reading, a measure of social recognition, and OXTR polymorphisms. 76 healthy adolescents and young adults were tested for associations between OXTR rs53576, rs2254298, rs2228485 and mind-reading using the "Reading the Mind in the Eyes Test" (RMET). After Bonferroni correction for multiple comparisons, rs2228485 was associated with the number of incorrect answers when subjects evaluated male faces (P = 0.000639). There were also associations between OXTR rs53576, rs2254298 and rs2228485 and other RMET dimensions at P < 0.05 (uncorrected). This study adds further evidence to the hypothesis that genetic variations in the OXTR modulate mind-reading and social behaviour.
Pilot Comparison of Radiance Temperature Scale Realization Between NIMT and NMIJ
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Yamada, Y.; Ishii, J.
2015-03-01
A pilot comparison of radiance temperature scale realizations between the National Institute of Metrology Thailand (NIMT) and the National Metrology Institute of Japan (NMIJ) was conducted. At the two national metrology institutes (NMIs), a 900 nm radiation thermometer, used as the transfer artifact, was calibrated by means of a multiple fixed-point method using fixed-point blackbodies at the Zn, Al, Ag, and Cu points, and by means of relative spectral responsivity measurements according to the International Temperature Scale of 1990 (ITS-90) definition. The Sakuma-Hattori equation is used for interpolating the radiance temperature scale between the four fixed points and also for extrapolating the ITS-90 temperature scale to 2000 °C. This paper compares the calibration results in terms of fixed-point measurements, relative spectral responsivity, and finally the radiance temperature scale. Good agreement for the fixed-point measurements was found when a correction for the change in the internal temperature of the artifact was applied using the temperature coefficient measured at the NMIJ. For the realized radiance temperature range from 400 °C to 1100 °C, the resulting scale differences between the two NMIs are well within the combined scale comparison uncertainty of 0.12 °C. The spectral responsivity measured at the NIMT has a curve comparable to that measured at the NMIJ, especially in the out-of-band region, yielding an ITS-90 scale difference within 1.0 °C from the Cu point to 2000 °C, whereas the combined realization comparison uncertainty of NIMT and NMIJ is 1.2 °C at 2000 °C.
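For readers unfamiliar with it, the Planck form of the Sakuma-Hattori equation relates the thermometer signal S to temperature T as S(T) = C / [exp(c2/(A·T + B)) − 1], and in practice A, B, C are determined from the fixed-point measurements. The sketch below uses assumed parameter values for a 900 nm thermometer (not the NIMT/NMIJ calibration) to show the interpolation and extrapolation described in the abstract.

```python
# Hedged sketch of the Sakuma-Hattori (Planck form) interpolation equation,
# S(T) = C / (exp(c2 / (A*T + B)) - 1). A, B, C below are assumed, illustrative
# values for a 900 nm thermometer, not the NIMT/NMIJ calibration constants.
import numpy as np

C2 = 1.438776877e-2                       # m*K, second radiation constant

def sakuma_hattori(T, A, B, C):
    """Thermometer signal as a function of temperature T (K)."""
    return C / np.expm1(C2 / (A * T + B))

def inverse_sh(S, A, B, C):
    """Radiance temperature (K) recovered from a signal S."""
    return (C2 / np.log1p(C / S) - B) / A

# Assumed parameters (A is roughly the mean effective wavelength in metres)
A, B, C = 9.0e-7, 4.0e-9, 1.0e9

# ITS-90 freezing points (K) used as fixed points: Zn, Al, Ag, Cu
T_fixed = np.array([692.677, 933.473, 1234.93, 1357.77])
S_fixed = sakuma_hattori(T_fixed, A, B, C)

# Round-trip check at the fixed points, plus extrapolation towards 2000 C (2273.15 K)
print("recovered fixed-point temperatures (K):", inverse_sh(S_fixed, A, B, C))
print("signal extrapolated to 2000 C:", sakuma_hattori(2273.15, A, B, C))
```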
Correction to: A Comparison of the Energetic Cost of Running in Marathon Racing Shoes.
Hoogkamer, Wouter; Kipp, Shalaya; Frank, Jesse H; Farina, Emily M; Luo, Geng; Kram, Rodger
2018-06-01
An Online First version of this article was made available online at https://link.springer.com/article/10.1007/s40279-017-0811-2 on 16 November 2017. An error was subsequently identified in the article, and the following correction should be noted.
Hu, Eric M; Zhang, Andrew; Silverman, Stuart G; Pedrosa, Ivan; Wang, Zhen J; Smith, Andrew D; Chandarana, Hersh; Doshi, Ankur; Shinagare, Atul B; Remer, Erick M; Kaffenberger, Samuel D; Miller, David C; Davenport, Matthew S
2018-05-16
The original version of this article contained an error in author name. The co-author's name was published as Ivan M. Pedrosa, instead it should be Ivan Pedrosa. The original article has been corrected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2015-12-01
This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: 240Pu eff mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
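As a companion to the "singles, doubles, and triples from measured multiplicity distributions" topic listed above, those quantities are the first three reduced factorial moments of the measured multiplicity histogram. The sketch below computes them from a hypothetical histogram; dead-time and accidental-coincidence corrections used in real analyses are deliberately omitted.

```python
# Hedged sketch: singles, doubles and triples as reduced factorial moments of a
# measured neutron multiplicity distribution. Histogram counts are hypothetical,
# and dead-time/accidental corrections applied in practice are omitted.
from math import comb

# counts[n] = number of coincidence gates in which n neutrons were detected (hypothetical)
counts = [52000, 8100, 1900, 420, 85, 12, 2]
total_gates = sum(counts)
p = [c / total_gates for c in counts]     # normalized multiplicity distribution

singles = sum(n * p_n for n, p_n in enumerate(p))            # 1st reduced factorial moment
doubles = sum(comb(n, 2) * p_n for n, p_n in enumerate(p))   # 2nd reduced factorial moment
triples = sum(comb(n, 3) * p_n for n, p_n in enumerate(p))   # 3rd reduced factorial moment

print(f"per-gate moments: S = {singles:.4f}, D = {doubles:.4f}, T = {triples:.5f}")
```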
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
A Simple Illustration for the Need of Multiple Comparison Procedures
ERIC Educational Resources Information Center
Carter, Rickey E.
2010-01-01
Statistical adjustments to accommodate multiple comparisons are routinely covered in introductory statistical courses. The fundamental rationale for such adjustments, however, may not be readily understood. This article presents a simple illustration to help remedy this.
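A concrete version of the kind of illustration the article argues for is the familiar familywise error calculation: with m independent tests at level α, the chance of at least one false positive is 1 − (1 − α)^m. The short simulation below is my own sketch, not the article's example; it verifies the analytic value and shows how a Bonferroni threshold restores the nominal rate.

```python
# Hedged sketch: familywise error rate under multiple comparisons, and Bonferroni's fix.
# Independent null hypotheses are simulated; not taken from the article itself.
import numpy as np

rng = np.random.default_rng(0)
m, alpha, n_sims = 20, 0.05, 20_000

# p-values under the null are uniform on [0, 1]
p = rng.uniform(size=(n_sims, m))

fwer_uncorrected = np.mean((p < alpha).any(axis=1))
fwer_bonferroni  = np.mean((p < alpha / m).any(axis=1))

print(f"Analytic FWER: {1 - (1 - alpha) ** m:.3f}")
print(f"Simulated FWER, uncorrected: {fwer_uncorrected:.3f}")
print(f"Simulated FWER, Bonferroni:  {fwer_bonferroni:.3f}")
```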
Micalos, Peter S; Korgaonkar, Mayuresh S; Drinkwater, Eric J; Cannon, Jack; Marino, Frank E
2014-01-01
Objective: The purpose of this research was to assess the functional brain activity and perceptual rating of innocuous somatic pressure stimulation before and after exercise rehabilitation in patients with chronic pain. Materials and methods: Eleven chronic pain patients and eight healthy pain-free controls completed 12 weeks of a supervised aerobic exercise intervention. Perceptual rating of standardized somatic pressure stimulation (2 kg) on the right anterior mid-thigh and brain responses during functional magnetic resonance imaging (fMRI) were assessed at pre- and postexercise rehabilitation. Results: There was a significant difference in the perceptual rating of innocuous somatic pressure stimulation between the chronic pain and control groups (P=0.02) but no difference following exercise rehabilitation. Whole-brain voxel-wise analysis with correction for multiple comparisons revealed trends for differences in fMRI responses between the chronic pain and control groups in the superior temporal gyrus (chronic pain > control, corrected P=0.30), thalamus, and caudate (control > chronic pain, corrected P=0.23). Repeated measures of the regions of interest (5 mm radius) for blood oxygen level-dependent signal response revealed trend differences for the superior temporal gyrus (P=0.06), thalamus (P=0.04), and caudate (P=0.21). Group-by-time interactions revealed trend differences in the caudate (P=0.10) and superior temporal gyrus (P=0.29). Conclusion: Augmented perceptual and brain responses to innocuous somatic pressure stimulation were shown in the chronic pain group compared to the control group; however, 12 weeks of exercise rehabilitation did not significantly attenuate these responses. PMID:25210471
Multimodal Randomized Functional MR Imaging of the Effects of Methylene Blue in the Human Brain.
Rodriguez, Pavel; Zhou, Wei; Barrett, Douglas W; Altmeyer, Wilson; Gutierrez, Juan E; Li, Jinqi; Lancaster, Jack L; Gonzalez-Lima, Francisco; Duong, Timothy Q
2016-11-01
Purpose: To investigate the sustained-attention and memory-enhancing neural correlates of the oral administration of methylene blue in the healthy human brain. Materials and Methods: The institutional review board approved this prospective, HIPAA-compliant, randomized, double-blinded, placebo-controlled clinical trial, and all patients provided informed consent. Twenty-six subjects (age range, 22-62 years) were enrolled. Functional magnetic resonance (MR) imaging was performed with a psychomotor vigilance task (sustained attention) and delayed match-to-sample tasks (short-term memory) before and 1 hour after administration of low-dose methylene blue or a placebo. Cerebrovascular reactivity effects were also measured with a carbon dioxide challenge. A 2 × 2 repeated-measures analysis of variance was performed with drug (methylene blue vs placebo) and time (before vs after drug administration) as factors to assess drug × time between-group interactions. Multiple comparison correction was applied, with cluster-corrected P < .05 indicating a significant difference. Results: Administration of methylene blue increased response in the bilateral insular cortex during the psychomotor vigilance task (Z = 2.9-3.4, P = .01-.008) and functional MR imaging response during the short-term memory task involving the prefrontal, parietal, and occipital cortex (Z = 2.9-4.2, P = .03-.0003). Methylene blue was also associated with a 7% increase in correct responses during memory retrieval (P = .01). Conclusion: Low-dose methylene blue can increase functional MR imaging activity during sustained-attention and short-term memory tasks and enhance memory retrieval. © RSNA, 2016 Online supplemental material is available for this article.
Li, Zhiguang; Kwekel, Joshua C; Chen, Tao
2012-01-01
Functional comparison across microarray platforms is used to assess the comparability or similarity of the biological relevance associated with the gene expression data generated by multiple microarray platforms. Comparisons at the functional level are very important considering that the ultimate purpose of microarray technology is to determine the biological meaning behind the gene expression changes under a specific condition, not just to generate a list of genes. Herein, we present a method named percentage of overlapping functions (POF) and illustrate how it is used to perform the functional comparison of microarray data generated across multiple platforms. This method facilitates the determination of functional differences or similarities in microarray data generated from multiple array platforms across all the functions that are presented on these platforms. This method can also be used to compare the functional differences or similarities between experiments, projects, or laboratories.
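The percentage of overlapping functions is, at heart, a set-overlap statistic over the functions (for example, enriched pathways or GO terms) detected on each platform. The sketch below uses one plausible normalization, overlap relative to the union of the two function sets; the paper's exact definition may differ, and the function lists are hypothetical.

```python
# Hedged sketch of a percentage-of-overlapping-functions (POF) style comparison.
# The normalization (union of the two sets) is an assumption; function lists are hypothetical.
from itertools import combinations

platform_functions = {
    "platform_A": {"apoptosis", "cell cycle", "DNA repair", "lipid metabolism"},
    "platform_B": {"apoptosis", "cell cycle", "oxidative stress"},
    "platform_C": {"apoptosis", "DNA repair", "oxidative stress", "inflammation"},
}

def pof(funcs_1, funcs_2):
    """Percentage of overlapping functions between two platforms."""
    overlap = funcs_1 & funcs_2
    union = funcs_1 | funcs_2
    return 100.0 * len(overlap) / len(union) if union else 0.0

for (name_1, f1), (name_2, f2) in combinations(platform_functions.items(), 2):
    print(f"POF({name_1}, {name_2}) = {pof(f1, f2):.1f}%")
```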
Retrieval of Aerosol Optical Depth Under Thin Cirrus from MODIS: Application to an Ocean Algorithm
NASA Technical Reports Server (NTRS)
Lee, Jaehwa; Hsu, Nai-Yung Christina; Sayer, Andrew Mark; Bettenhausen, Corey
2013-01-01
A strategy for retrieving aerosol optical depth (AOD) under conditions of thin cirrus coverage from the Moderate Resolution Imaging Spectroradiometer (MODIS) is presented. We adopt an empirical method that derives the cirrus contribution to measured reflectance in seven bands from the visible to shortwave infrared (0.47, 0.55, 0.65, 0.86, 1.24, 1.63, and 2.12 µm, commonly used for AOD retrievals) by using the correlations between the top-of-atmosphere (TOA) reflectance at 1.38 µm and these bands. The 1.38 µm band is used because of its strong water vapor absorption, which allows us to extract the contribution of cirrus clouds to TOA reflectance and create cirrus-corrected TOA reflectances in the seven bands of interest. These cirrus-corrected TOA reflectances are then used in the aerosol retrieval algorithm to determine cirrus-corrected AOD. The cirrus correction algorithm reduces the cirrus contamination in the AOD data, as shown by a decrease in both the magnitude and the spatial variability of AOD over areas contaminated by thin cirrus. Comparisons of retrieved AOD against Aerosol Robotic Network observations at Nauru in the equatorial Pacific reveal that the cirrus correction procedure improves the data quality: the percentage of data within the expected error ±(0.03 + 0.05 × AOD) increases from 40% to 80% for cirrus-corrected points only and from 80% to 86% for all points (i.e., both corrected and uncorrected retrievals). Statistical comparisons with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) retrievals are also carried out. A high correlation (R = 0.89) between the CALIOP cirrus optical depth and the AOD correction magnitude suggests potential applicability of the cirrus correction procedure to other MODIS-like sensors.
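The empirical correction described here amounts to regressing each band's TOA reflectance against the 1.38 µm cirrus band and subtracting the cirrus-attributable part before the aerosol retrieval. The sketch below shows that logic with synthetic reflectances; the simple linear form and its coefficients are assumptions of this example, not the operational MODIS algorithm.

```python
# Hedged sketch of a 1.38-um-based thin-cirrus correction of TOA reflectance.
# Synthetic data; the linear form and its coefficients are assumptions, not the
# operational algorithm used for MODIS.
import numpy as np

rng = np.random.default_rng(1)
n_pix = 500

rho_cirrus_138 = rng.uniform(0.0, 0.05, n_pix)        # 1.38 um TOA reflectance (cirrus signal)
rho_surface_047 = rng.uniform(0.02, 0.06, n_pix)      # underlying cirrus-free 0.47 um reflectance
slope_true = 0.8                                      # assumed band-to-1.38 um cirrus ratio

rho_toa_047 = rho_surface_047 + slope_true * rho_cirrus_138 + rng.normal(0, 0.002, n_pix)

# Estimate the band-specific slope from the correlation with the 1.38 um band,
# then remove the cirrus contribution before the aerosol retrieval.
slope_hat = np.polyfit(rho_cirrus_138, rho_toa_047, 1)[0]
rho_corrected_047 = rho_toa_047 - slope_hat * rho_cirrus_138

print(f"estimated slope: {slope_hat:.2f}")
print(f"mean bias before/after correction: "
      f"{np.mean(rho_toa_047 - rho_surface_047):.4f} / "
      f"{np.mean(rho_corrected_047 - rho_surface_047):.4f}")
```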
Examination of association to autism of common genetic variationin genes related to dopamine.
Anderson, B M; Schnetz-Boutaud, N; Bartlett, J; Wright, H H; Abramson, R K; Cuccaro, M L; Gilbert, J R; Pericak-Vance, M A; Haines, J L
2008-12-01
Autism is a severe neurodevelopmental disorder characterized by a triad of impairments. Autistic individuals display significant disturbances in language and reciprocal social interactions, combined with repetitive and stereotypic behaviors. Prevalence studies suggest that autism is more common than originally believed, with recent estimates citing a rate of one in 150. Although multiple genetic linkage and association studies have yielded multiple suggestive genes or chromosomal regions, a specific risk locus has yet to be identified and widely confirmed. Because many etiologies have been suggested for this complex syndrome, we hypothesize that one of the difficulties in identifying autism genes is that multiple genetic variants may be required to significantly increase the risk of developing autism. Thus, we took the alternative approach of selecting 14 prominent dopamine pathway candidate genes for detailed study, genotyping 28 single nucleotide polymorphisms. Although we did observe a nominally significant association for rs2239535 (P=0.008) on chromosome 20, single-locus analysis did not reveal any results that remained significant after correction for multiple comparisons. No significant interaction was identified when Multifactor Dimensionality Reduction was employed to test specifically for multilocus effects. Although genome-wide linkage scans in autism have provided support for linkage to various loci along the dopamine pathway, our study does not provide strong evidence of linkage or association to any specific gene or combination of genes within the pathway. These results demonstrate that common genetic variation within the tested genes located within this pathway plays at most a minor to moderate role in overall autism pathogenesis.
Aggression against Women by Men: Sexual and Spousal Assault.
ERIC Educational Resources Information Center
Dewhurst, Ann Marie; And Others
1992-01-01
Compared 19 sexual offenders, 22 batterers, 10 violent community comparison subjects, and 21 community comparison subjects on demographic, personality, and attitudinal variables. Discriminating variables correctly classified 75 percent of participants. Hostility toward women and depression were two best discriminating variables, suggesting that…
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes that constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method, it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes that are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
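To make the interleaving idea concrete: an m-way block interleaver distributes a channel burst across the m subcodes, so each subcode sees at most about ceil(burst length / m) errors, which a modest subcode can then handle. The sketch below only demonstrates this spreading effect; it is not the paper's syndrome-correlation decoder, and the burst position and code sizes are hypothetical.

```python
# Hedged sketch: an m-way block interleaver spreads a single channel burst across
# the subcodes. Illustrates the spreading effect only, not the paper's
# syndrome-correlation decoding method. Parameters are hypothetical.
m = 4                      # interleaving depth (number of subcodes)
n = 12                     # length of each subcode word

# Interleaved transmission order: channel symbol j belongs to subcode j % m
burst_start, burst_length = 17, 9            # hypothetical burst position and length

errors_per_subcode = [0] * m
for j in range(burst_start, burst_start + burst_length):
    errors_per_subcode[j % m] += 1

print("errors seen by each subcode:", errors_per_subcode)
# A burst of length 9 gives each of the 4 subcodes at most ceil(9/4) = 3 errors.
```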
Emergent, untrained stimulus relations in many-to-one matching-to-sample discriminations in rats.
Nakagawa, Esho
2005-03-01
The present experiment investigated whether rats formed emergent, untrained stimulus relations in many-to-one matching-to-sample discriminations. In Phase 1, rats were trained to match two samples (triangle and horizontal stripes) to a common comparison (horizontal stripes) and two additional samples (circle or vertical stripes) to another comparison (vertical stripes). Then, in Phase 2, the rats were trained to match the one sample (triangle) to a new comparison (black) and the other sample (circle) to another comparison (white). In the Phase 3 test, half the rats (consistent group) were given two new tasks in which the sample-correct comparison relation was consistent with any emergent stimulus relations that previously may have been learned. The remaining 6 rats (inconsistent group) were given two new tasks in which the sample-correct comparison relation was not consistent with any previously learned emergent stimulus relations. Rats in the consistent group showed more accurate performance at the start of Phase 3, and faster learning to criterion in this phase, as compared with rats in the inconsistent group. This finding suggests that rats may form emergent, untrained stimulus relations between the discriminative stimuli in many-to-one matching-to-sample discriminations.
Multiple needle puncturing: balancing the varus knee.
Bellemans, Johan
2011-09-09
The so-called "pie crusting" technique using multiple stab incisions is a well-established procedure for correcting tightness of the iliotibial band in the valgus knee. It is, however, not applicable for balancing the medial side in varus knees because of the risk for iatrogenic transsection of the medial collateral ligament (MCL). This article presents our experience with a safer alternative and minimally invasive technique for medial soft tissue balancing, where we make multiple punctures in the MCL using a 19-gauge needle to progressively stretch the MCL until a correct ligament balance is achieved. Our technique requires minimal to no additional soft tissue dissection and can even be performed percutaneously when necessary. This technique, therefore, does not impact the length of the skin or soft tissue incisions. We analyzed 61 cases with varus deformity that were intraoperatively treated using this technique. In 4 other cases, the technique was used as a percutaneous procedure to correct postoperative medial tightness that caused persistent pain on the medial side. The procedure was considered successful when a 2- to 4-mm mediolateral joint line opening was obtained in extension and 2 to 6 mm in flexion. In 62 cases (95%), a progressive correction of medial tightness was achieved according to the above-described criteria. Three cases were overreleased and required compensatory release of the lateral structures and use of a thicker insert. Based on these results, we consider needle puncturing an effective and safe technique for progressive correction of MCL tightness during minimally invasive total knee arthroplasty. Copyright 2011, SLACK Incorporated.
Walkowski, Slawomir; Lundin, Mikael; Szymas, Janusz; Lundin, Johan
2015-01-01
The way of viewing whole slide images (WSI) can be tracked and analyzed. In particular, it can be useful to learn how medical students view WSIs during exams and how their viewing behavior is correlated with the correctness of the answers they give. We used a software-based view path tracking method that enabled gathering data about the viewing behavior of multiple simultaneous WSI users. This approach was implemented and applied during two practical exams in oral pathology in 2012 (88 students) and 2013 (91 students), which were based on questions with attached WSIs. Gathered data were visualized and analyzed in multiple ways. As part of an extended analysis, we tried to use machine learning approaches to predict the correctness of students' answers based on how they viewed WSIs. We compared the results of analyses for 2012 and 2013 - done for a single question, for student groups, and for a set of questions. The overall patterns were generally consistent across the two years. Moreover, viewing behavior data appeared to have certain potential for predicting answers' correctness, and some outcomes of the machine learning approaches were in the right direction. However, general prediction results were not satisfactory in terms of precision and recall. Our work confirmed that the view path tracking method is useful for discovering the viewing behavior of students analyzing WSIs. It provided multiple useful insights in this area, and the general results of our analyses were consistent across the two exams. On the other hand, predicting answers' correctness appeared to be a difficult task - students' answers seem to be often unpredictable.
ERIC Educational Resources Information Center
Lo, Ya-yu; Starling, A. Leyf Peirce
2009-01-01
This study examined the effects of a graphing task analysis using the Microsoft[R] Office Excel 2007 program on the single-subject multiple baseline graphing skills of three university graduate students. Using a multiple probe across participants design, the study demonstrated a functional relationship between the number of correct graphing…
Do Streaks Matter in Multiple-Choice Tests?
ERIC Educational Resources Information Center
Kiss, Hubert János; Selei, Adrienn
2018-01-01
Success in life is determined to a large extent by school performance, which in turn depends heavily on grades obtained in exams. In this study, we investigate a particular type of exam: multiple-choice tests. More concretely, we study if patterns of correct answers in multiple-choice tests affect performance. We design an experiment to study if…
NASA Astrophysics Data System (ADS)
Zherebtsov, O. M.; Shabaev, V. M.; Yerokhin, V. A.
2000-12-01
Third-order interelectronic-interaction correction to the energies of the (1s)²2s and (1s)²2p1/2 states of high-Z lithiumlike ions is evaluated within the Breit approximation in the range 20 ≤ Z ≤ 100. The calculation is carried out using both the relativistic configuration-interaction method and perturbation theory. The correction is shown to be important for the comparison of theory and experiment.
NASA Astrophysics Data System (ADS)
Abitew, T. A.; Roy, T.; Serrat-Capdevila, A.; van Griensven, A.; Bauwens, W.; Valdes, J. B.
2016-12-01
The Tekeze Basin in northern Ethiopia hosts one of Africa's largest arch dams and plays a vital role in hydropower generation. However, little has been done on the hydrology of the basin due to limited in situ hydroclimatological data. Therefore, the main objective of this research is to simulate streamflow upstream of the Tekeze Dam using the Soil and Water Assessment Tool (SWAT) forced by bias-corrected multiple satellite rainfall products (CMORPH, TMPA and PERSIANN-CCS). This talk will present the potential as well as the skill of bias-corrected satellite rainfall products for streamflow prediction in tropical Africa. Additionally, the SWAT model results will be compared with previous conceptual hydrological models (HyMOD and HBV) from the SERVIR streamflow forecasting in African basins project (http://www.swaat.arizona.edu/index.html).
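Bias correction of satellite rainfall before it forces a hydrological model can be as simple as scaling against gauge data or an empirical quantile-mapping adjustment. The sketch below shows a generic quantile-mapping correction with synthetic series; it is not necessarily the procedure used in this study.

```python
# Hedged sketch: empirical quantile mapping to bias-correct a satellite rainfall
# series against gauge observations. Synthetic data; a generic method, not
# necessarily the correction applied in this study.
import numpy as np

rng = np.random.default_rng(2)
gauge     = rng.gamma(shape=0.6, scale=8.0, size=3000)    # "observed" daily rainfall (mm)
satellite = rng.gamma(shape=0.5, scale=6.0, size=3000)    # biased satellite estimate (mm)

def quantile_map(x, biased_ref, target_ref):
    """Map values of x through the empirical CDFs of the reference series."""
    quantiles = np.interp(x, np.sort(biased_ref),
                          np.linspace(0.0, 1.0, biased_ref.size))
    return np.interp(quantiles, np.linspace(0.0, 1.0, target_ref.size),
                     np.sort(target_ref))

satellite_corrected = quantile_map(satellite, satellite, gauge)

print(f"mean rainfall gauge / raw sat / corrected sat: "
      f"{gauge.mean():.2f} / {satellite.mean():.2f} / {satellite_corrected.mean():.2f} mm")
```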
A new surgical technique for medial collateral ligament balancing: multiple needle puncturing.
Bellemans, Johan; Vandenneucker, Hilde; Van Lauwe, Johan; Victor, Jan
2010-10-01
In this article, we present our experience with a new technique for medial soft tissue balancing, where we make multiple punctures in the medial collateral ligament (MCL) using a 19-gauge needle, to progressively stretch the MCL until a correct ligament balance is achieved. Ligament status was evaluated both before and after the procedure using computer navigation and mediolateral stress testing. The procedure was considered successful when 2 to 4-mm mediolateral joint line opening was obtained in extension and 2 to 6 mm in flexion. In 34 of 35 cases, a progressive correction of medial tightness was achieved according to the above described criteria. One case was considered overreleased in extension. Needle puncturing is a new, effective, and safe technique for progressive correction of MCL tightness in the varus knee. Copyright © 2010 Elsevier Inc. All rights reserved.
Providing Counseling for Transgendered Inmates: A Survey of Correctional Services
ERIC Educational Resources Information Center
von Dresner, Kara Sandor; Underwood, Lee A.; Suarez, Elisabeth; Franklin, Timothy
2013-01-01
The purpose of this study was to survey the current assessment, housing, and mental health treatment needs of transsexual inmates within state correctional facilities. The literature reviewed epidemiology, prevalence, multiple uses of terms, assessment, and current standards of care. Along with the rise of the multicultural movement, growing…
Rylov, A I; Kravets, N S
2001-01-01
The experience of treating 69 injured patients with posttraumatic retroperitoneal hematoma and severe multiple combined abdominal trauma was analyzed. Application of the proposed classification permits the diagnosis to be formulated and the treatment tactics to be chosen correctly. An intraoperative tactics algorithm was elaborated; it promotes the correct analysis of intraoperative findings and reduces the frequency of diagnostic mistakes. In the presence of a vast defect that made it impossible to suture over the parietal peritoneum, extraperitonization using cerebral dura mater was performed. Operative intervention was concluded with drainage and subsequent laser therapy.
NASA Technical Reports Server (NTRS)
Roman, N. G.; Warren, W. H., Jr.
1983-01-01
A revised and corrected version of the machine-readable catalog has been prepared. Cross identifications of the GC stars to the HD and DM catalogs have been replaced by data from the new SAO-HD-GC-DM Cross Index (Roman, Warren and Schofield 1983), including component identifications for multiple SAO entries having identical DM numbers in the SAO Catalog, supplemental Bonner Durchmusterung stars (lower case letter designations) and codes for multiple HD stars. Additional individual corrections have been incorporated based upon errors found during analyses of other catalogs.
Aethalometer multiple scattering correction Cref for mineral dust aerosols
NASA Astrophysics Data System (ADS)
Di Biagio, Claudia; Formenti, Paola; Cazaunau, Mathieu; Pangui, Edouard; Marchand, Nicolas; Doussin, Jean-François
2017-08-01
In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer respectively at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85-0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98-0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2 % (450 nm) and 11 % (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3 % in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2.32 (±0.01) at 450 and 660 nm (SSA = 0.96-0.97) for kaolinite, and Cref of 2.32 (±0.36) at 450 nm and 2.32 (±0.35) at 660 nm for pollution aerosols (SSA = 0.62-0.87 at 450 nm and 0.42-0.76 at 660 nm).
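Operationally, Cref is the ratio between the Aethalometer attenuation-derived absorption and a reference absorption coefficient (extinction minus scattering at 450 nm, or the MAAP at 660 nm). The sketch below shows that ratio on synthetic time series; the variable names and numbers are illustrative rather than the campaign data, and filter-loading corrections are ignored.

```python
# Hedged sketch of estimating the Aethalometer multiple-scattering correction Cref as
# the ratio of the (uncorrected) attenuation coefficient to a reference absorption
# coefficient. Synthetic data; filter-loading effects are ignored.
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Reference absorption at 450 nm: extinction (CAPS) minus scattering (nephelometer), in Mm^-1
b_ext_450 = rng.uniform(80.0, 400.0, n)
ssa_450   = 0.90                                     # assumed single scattering albedo
b_sca_450 = ssa_450 * b_ext_450 + rng.normal(0.0, 2.0, n)
b_abs_ref = b_ext_450 - b_sca_450

# Aethalometer attenuation coefficient, inflated by multiple scattering in the filter
cref_true = 2.1                                      # value to be recovered (assumed)
b_atn_450 = cref_true * b_abs_ref + rng.normal(0.0, 1.0, n)

cref_est = np.median(b_atn_450 / b_abs_ref)          # robust ratio estimate
print(f"estimated Cref at 450 nm: {cref_est:.2f}")
```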
Modeling boundary measurements of scattered light using the corrected diffusion approximation
Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.
2012-01-01
We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102
Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges
NASA Technical Reports Server (NTRS)
Kandula, Max; Haddad, George
2007-01-01
This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons are made between correction factors computed with constant properties and with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.
Substance Use and HIV Prevention for Youth in Correctional Facilities
ERIC Educational Resources Information Center
Mouttapa, Michele; Watson, Donnie W.; McCuller, William J.; Reiber, Chris; Tsai, Winnie
2009-01-01
Evidence-based programs for substance use and HIV prevention (SUHIP) were adapted for high-risk juveniles detained at 24-hour secure correctional facilities. In this pilot study, comparisons were made between adolescents who received the SUHIP intervention and a control group on changes in: (1) knowledge of HIV prevention behaviors, (2) attitudes…
Perturbation corrections to Koopmans' theorem. V - A study with large basis sets
NASA Technical Reports Server (NTRS)
Chong, D. P.; Langhoff, S. R.
1982-01-01
The vertical ionization potentials of N2, F2 and H2O were calculated by perturbation corrections to Koopmans' theorem using six different basis sets. The largest set used includes several sets of polarization functions. Comparison is made with measured values and with results of computations using Green's functions.
A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.
Elliott, Alan C; Hynan, Linda S
2011-04-01
The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
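A Python analogue of such a post hoc procedure is Dunn's rank-based test applied after a significant Kruskal-Wallis result. The sketch below (no tie correction, Bonferroni adjustment) is a generic illustration, not a translation of the SAS macro, and the group data are synthetic.

```python
# Hedged sketch: Kruskal-Wallis followed by Dunn-type pairwise comparisons with a
# Bonferroni adjustment. Generic illustration, not a port of the SAS macro; no tie
# correction is applied. Group data are synthetic.
from itertools import combinations
import numpy as np
from scipy.stats import kruskal, rankdata, norm

rng = np.random.default_rng(4)
groups = {"A": rng.normal(0.0, 1.0, 15),
          "B": rng.normal(0.8, 1.0, 15),
          "C": rng.normal(0.0, 1.5, 15)}

h_stat, p_omnibus = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_omnibus:.4f}")

# Pooled ranks for Dunn's z statistics
all_values = np.concatenate(list(groups.values()))
ranks = rankdata(all_values)
sizes = {k: len(v) for k, v in groups.items()}
offsets = np.cumsum([0] + [sizes[k] for k in groups])
mean_ranks = {k: ranks[offsets[i]:offsets[i + 1]].mean() for i, k in enumerate(groups)}

n_total = len(all_values)
n_pairs = len(list(combinations(groups, 2)))
for g1, g2 in combinations(groups, 2):
    se = np.sqrt(n_total * (n_total + 1) / 12.0 * (1.0 / sizes[g1] + 1.0 / sizes[g2]))
    z = (mean_ranks[g1] - mean_ranks[g2]) / se
    p_adj = min(1.0, 2.0 * norm.sf(abs(z)) * n_pairs)     # Bonferroni-adjusted
    print(f"{g1} vs {g2}: z = {z:.2f}, adjusted p = {p_adj:.4f}")
```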
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro
2015-04-01
The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), estimates or measurements of source terms, and the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the methods to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple data sets: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h all over the domain, and thus the first step in their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) all over the aquifer. DSM is intrinsically based on the use of multiple data sets, which permit writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended to work with multiple data sets in this work. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient and that obtained with a comparison model, which shares the same boundary conditions and source terms as the model to be calibrated but uses a tentative T field. On the other hand, the DCM algorithm applies the ratio of the hydraulic gradients obtained for two different forward models, one with the same boundary conditions and source terms as the model to be calibrated and the other with prescribed head at the positions where in- or out-flow is known and h is measured. For DCM and CMM, multiple stimulation is used by updating the T field separately for each data set and then combining the resulting updated fields with different possible statistics (arithmetic, geometric or harmonic mean, median, least change, etc.). The three algorithms are tested, and their characteristics and results are compared with a field data set, which was provided by Prof. Fritz Stauffer (ETH) and corresponds to a pumping test in a thin alluvial aquifer in northern Switzerland. Three data sets are available, corresponding to the undisturbed state, to the flow field created by a single pumping well, and to the situation created by a 'hydraulic dipole', i.e., an extraction and an injection well. These data sets permit testing of the three inverse methods and the different options that can be chosen for their use.
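In code, the CMM update described above is a pointwise multiplicative correction of the tentative transmissivity by a ratio of hydraulic-gradient magnitudes. The 1-D sketch below uses one plausible form of that ratio (comparison-model gradient over observed gradient); it is an idealized illustration with synthetic head fields, not the authors' implementation.

```python
# Hedged sketch of a Comparison Model Method (CMM)-style transmissivity update in 1-D.
# One plausible form of the correction (ratio of comparison-model to observed hydraulic
# gradient magnitudes) is used; synthetic fields, not the authors' implementation.
import numpy as np

x = np.linspace(0.0, 1000.0, 101)                                 # m
T_true = 1.0e-3 * (1.0 + 0.5 * np.sin(2 * np.pi * x / 1000.0))    # "unknown" transmissivity

# Steady 1-D flow with constant through-flow q: T * dh/dx = -q  =>  dh/dx = -q / T
q = 1.0e-5                                   # m^2/s, assumed flux per unit width
grad_h_obs = -q / T_true                     # gradient implied by the (synthetic) observations

T_guess = np.full_like(x, 1.0e-3)            # tentative transmissivity field
for _ in range(5):                           # a few CMM-style sweeps
    grad_h_cm = -q / T_guess                 # gradient of the comparison model
    T_guess *= np.abs(grad_h_cm) / np.abs(grad_h_obs)

# In this idealized constant-flux case a single sweep already recovers T exactly.
print("max relative error after update:",
      np.max(np.abs(T_guess - T_true) / T_true))
```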
Optics Corrections with LOCO in the Fermilab Booster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Cheng-Yang; Prost, Lionel; Seiya, Kiyomi
2016-06-01
The optics of the Fermilab Booster has been corrected with LOCO (Linear Optics from Closed Orbits). However, the first corrections did not show any improvement in capture efficiency at injection. A detailed analysis of the results showed that the problem lay in the MADX optics file: both the quadrupole and chromatic strengths were originally set as constants independent of beam energy. Careful comparison between the measured and calculated tunes and chromaticity showed that these strengths are energy dependent. After the MADX model was modified with these new energy-dependent strengths, the LOCO-corrected lattice was applied to the Booster. The effect of the corrected lattice is discussed here.
Quantum Corrections to the 'Atomistic' MOSFET Simulations
NASA Technical Reports Server (NTRS)
Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.
2000-01-01
We have introduced, in a simple and efficient manner, quantum mechanical corrections into our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied, in comparison with classical simulations, the effect of the quantum mechanical corrections on the simulation of random dopant induced threshold voltage fluctuations, the effect of single charge trapping on interface states, and the effect of oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not significantly affect the amplitude of the random telegraph noise associated with single carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.
Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures
NASA Astrophysics Data System (ADS)
Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.
1989-01-01
Two amendments to the well-known Eckhaus' stability criterion for small-amplitude non-linear structures, generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths, are obtained. Firstly, we evaluate small corrections to the main Eckhaus' term which, in contrast to that term, do not have a universal form. Comparison of these non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of an effective nonlinear evolution equation. In particular, the comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that, in an expansion of their non-linear parts in powers of dynamical variables, the second and third degree terms are absent.
Beard, Brian B; Kainz, Wolfgang
2004-01-01
We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601
Injuries in martial arts: a comparison of five styles
Zetaruk, M; Violan, M; Zurakowski, D; Micheli, L
2005-01-01
Objective: To compare five martial arts with respect to injury outcomes. Methods: A one-year retrospective cohort was studied using an injury survey. Data on 263 martial arts participants (Shotokan karate, n = 114; aikido, n = 47; tae kwon do, n = 49; kung fu, n = 39; tai chi, n = 14) were analysed. Predictor variables included age, sex, training frequency (⩽3 h/week v >3 h/week), experience (<3 years v ⩾3 years), and martial art style. Outcome measures were injuries requiring time off from training, major injuries (⩾7 days off), multiple injuries (⩾3), body region, and type of injury. Logistic regression was used to determine odds ratios (OR) and confidence intervals (CI). Fisher's exact test was used for comparisons between styles, with a Bonferroni correction for multiple comparisons. Results: The rate of injuries, expressed as the percentage of participants sustaining an injury that required time off training per year, varied according to style: 59% tae kwon do, 51% aikido, 38% kung fu, 30% karate, and 14% tai chi. There was a threefold increased risk of injury and multiple injury in tae kwon do compared with karate (p<0.001). Subjects ⩾18 years of age were at greater risk of injury than younger ones (p<0.05; OR 3.95; CI 1.48 to 9.52). Martial artists with at least three years of experience were twice as likely to sustain injury as less experienced students (p<0.005; OR 2.46; CI 1.51 to 4.02). Training >3 h/week was also a significant predictor of injury (p<0.05; OR 1.85; CI 1.13 to 3.05). Compared with karate, the risks of head/neck injury, upper extremity injury, and soft tissue injury were all higher in aikido (p<0.005), and the risks of head/neck, groin, and upper and lower extremity injuries were higher in tae kwon do (p<0.001). No sex differences were found for any of the outcomes studied. Conclusions: There is a higher rate of injury in tae kwon do than in Shotokan karate. Different martial arts have significantly different types and distributions of injuries. Martial arts appear to be safe for young athletes, particularly those at beginner or intermediate levels. PMID:15618336
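The between-style comparison described above can be sketched as pairwise Fisher's exact tests with a Bonferroni adjustment. The injured/uninjured counts below are only approximations derived from the reported injury rates and sample sizes, and the Python/SciPy implementation is an assumption, not the authors' analysis code.

```python
# Minimal sketch: pairwise Fisher's exact tests on injury counts per style
# with a Bonferroni correction across all 10 style pairs.
from itertools import combinations
from scipy.stats import fisher_exact

# (injured, not injured) per style -- illustrative counts approximated from the
# reported rates, not the study data.
injured = {"taekwondo": (29, 20), "karate": (34, 80), "aikido": (24, 23),
           "kungfu": (15, 24), "taichi": (2, 12)}

pairs = list(combinations(injured, 2))
for s1, s2 in pairs:
    table = [list(injured[s1]), list(injured[s2])]
    odds_ratio, p = fisher_exact(table)
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni adjustment
    print(f"{s1} vs {s2}: OR = {odds_ratio:.2f}, Bonferroni-adjusted p = {p_adj:.3f}")
```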
CANDELS Visual Classifications: Scheme, Data Release, and First Results
NASA Astrophysics Data System (ADS)
Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Harry; Koo, David; Bassett, Robert; Bernyk, Maksym; Blancato, Kirsten; Bournaud, Frederic; Cassata, Paolo; Castellano, Marco; Cheung, Edmond; Conselice, Christopher J.; Croton, Darren; Dahlen, Tomas; de Mello, Duilia F.; DeGroot, Laura; Donley, Jennifer; Guedes, Javiera; Grogin, Norman; Hathi, Nimish; Hilton, Matt; Hollon, Brett; Koekemoer, Anton; Liu, Nick; Lucas, Ray A.; Martig, Marie; McGrath, Elizabeth; McPartland, Conor; Mobasher, Bahram; Morlock, Alice; O'Leary, Erin; Peth, Mike; Pforr, Janine; Pillepich, Annalisa; Rosario, David; Soto, Emmaris; Straughn, Amber; Telford, Olivia; Sunnquist, Ben; Trump, Jonathan; Weiner, Benjamin; Wuyts, Stijn; Inami, Hanae; Kassin, Susan; Lani, Caterina; Poole, Gregory B.; Rizer, Zachary
2015-11-01
We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H < 24.5 involving the dedicated efforts of over 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields, with classifications from 3 to 5 independent classifiers for each galaxy. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed—GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sérsic index. We find that the level of agreement among classifiers is quite good (>70% across the full magnitude range) and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement (>50%) and irregulars the lowest (<10%). A comparison of our classifications with the Sérsic index and rest-frame colors shows a clear separation between disk and spheroid populations. Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands. These galaxies typically have very clumpy and extended morphology or are very faint in the V-band.
Brooks, Christine; Vickers, Amy Manning; Aryal, Subhash
2013-04-01
The objective of this study was to compare the differences in lipid loss from 24 samples of banked donor human milk (DHM) among 3 feeding methods: DHM given by syringe pump over 1 hour, over 2 hours, and by bolus/gravity gavage. Comparative, descriptive. There were no human subjects. Twenty-four samples of 8 oz of DHM were divided into four 60-mL aliquots. Timed feedings were given by Medfusion 2001 syringe pumps with syringes connected to narrow-lumened extension sets designed for enteral feedings and connected to standard silastic enteral feeding tubes. Gravity feedings were given using the identical syringes connected to the same silastic feeding tubes. All aliquots were analyzed with the York Dairy Analyzer. Univariate repeated-measures analyses of variance were used for the omnibus testing of overall differences between the feeding methods. Lipid content, expressed as grams per deciliter at the end of each feeding method, was compared with the prefed control samples using Dunnett's test. The Tukey correction was used for other pairwise multiple comparisons. The univariate repeated-measures analysis of variance conducted to test for overall differences between feeding methods showed a significant difference between the methods (F = 58.57, df = 3, 69, P < .0001). Post hoc analysis using Dunnett's approach revealed a significant difference in fat content between the control sample and the 1-hour and 2-hour feeding methods (P < .0001), but we did not find any significant difference in fat content between the control and the gravity feeding methods (P = .3296). Pairwise comparison using the Tukey correction revealed a significant difference between the gravity and 1-hour feeding methods (P < .0001) and between the gravity and 2-hour feeding methods (P < .0001). There was no significant difference in lipid content between the 1-hour and 2-hour feeding methods (P = .2729). Unlike gravity feedings, the timed feedings resulted in a statistically significant loss of fat compared with their controls. These findings should raise questions about how infants in the neonatal intensive care unit are routinely gavage fed.
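A hedged sketch of the post hoc logic described above (each method compared with a control via Dunnett's test, all pairs via a Tukey-type procedure). It assumes SciPy >= 1.11 for scipy.stats.dunnett and >= 1.8 for scipy.stats.tukey_hsd, uses made-up fat values, and treats the groups as independent samples rather than reproducing the study's repeated-measures design.

```python
# Minimal sketch: Dunnett's test against a control plus Tukey HSD for all pairs.
import numpy as np
from scipy.stats import dunnett, tukey_hsd

# Illustrative fat contents (g/dL), not the study data.
control  = np.array([3.9, 4.1, 4.0, 3.8, 4.2])
one_hour = np.array([3.1, 3.3, 3.0, 3.2, 3.1])
two_hour = np.array([3.2, 3.4, 3.1, 3.3, 3.2])
gravity  = np.array([3.8, 4.0, 3.9, 3.7, 4.1])

dn = dunnett(one_hour, two_hour, gravity, control=control)
print("Dunnett p-values vs control:", dn.pvalue)

tk = tukey_hsd(one_hour, two_hour, gravity, control)
print(tk)  # pairwise statistics, p-values and confidence intervals
```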
A comparison of somatic mutational spectra in healthy study populations from Russia, Sweden and USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noori, P; Hou, S; Jones, I M
Comparison of mutation spectra at the hypoxanthine-phosphoribosyl transferase (HPRT) gene of peripheral blood T lymphocytes may provide insight into the aetiology of somatic mutation contributing to carcinogenesis and other diseases. To increase knowledge of mutation spectra in healthy people, we have analyzed HPRT mutant T-cells of 50 healthy Russians originally recruited as controls for a study of Chernobyl clean-up workers (Jones et al. Radiation Res. 158, 2002, 424). Reverse transcriptase polymerase chain reactions and DNA sequencing identified 161 independent mutations among 176 thioguanine resistant mutants. Forty (40) mutations affected splicing mechanisms and 27 deletions or insertions of 1 to 60 nucleotides were identified. Ninety four (94) single base substitutions were identified, including 62 different mutations at 55 different nucleotide positions, of which 19 had not previously been reported in human T-cells. Comparison of this base substitution spectrum with mutation spectra in a USA (Burkhart-Schultz et al. Carcinogenesis 17, 1996, 1871) and two Swedish populations (Podlutsky et al, Carcinogenesis 19, 1998, 557, Podlutsky et al. Mutation Res. 431, 1999, 325) revealed similarity in the type, frequency and distribution of mutations in the four spectra, consistent with aetiologies inherent in human metabolism. There were 15-19 identical mutations in the three pair-wise comparisons of Russian with USA and Swedish spectra. Intriguingly, there were 21 mutations unique to the Russian spectrum, and comparison by the Monte Carlo method of Adams and Skopek (J. Mol. Biol. 194, 1987, 391) indicated that the Russian spectrum was different from both Swedish spectra (P=0.007, 0.002) but not different from the USA spectrum (P=0.07), when Bonferroni correction for multiple comparisons was made (p < 0.008 required for significance). Age and smoking did not account for these differences. Other factors causing mutational differences need to be explored.
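The sketch below shows a generic Monte Carlo comparison of two mutation spectra, in the spirit of the Adams and Skopek approach cited above but not their exact hypergeometric algorithm; the category counts, the chi-square-style statistic and the label-permutation scheme are illustrative assumptions.

```python
# Minimal sketch: permutation test comparing two mutation spectra.
import numpy as np

rng = np.random.default_rng(0)
spectrum_a = np.array([12, 7, 20, 5, 9])   # counts per mutation class, population A
spectrum_b = np.array([15, 3, 10, 9, 6])   # counts per mutation class, population B

def chi2_stat(a, b):
    obs = np.vstack([a, b]).astype(float)
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

observed = chi2_stat(spectrum_a, spectrum_b)

# Permute mutation-class labels between the two populations and recompute.
classes = np.repeat(np.arange(len(spectrum_a)), spectrum_a + spectrum_b)
n_a = spectrum_a.sum()
n_iter, count = 10000, 0
for _ in range(n_iter):
    perm = rng.permutation(classes)
    a = np.bincount(perm[:n_a], minlength=len(spectrum_a))
    b = np.bincount(perm[n_a:], minlength=len(spectrum_a))
    if chi2_stat(a, b) >= observed:
        count += 1
print(f"Monte Carlo p = {count / n_iter:.4f}")
```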
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Comiso, Josefino C.; Fraser, Robert S.; Firestone, James K.; Schieber, Brian D.; Yeh, Eueng-Nan; Arrigo, Kevin R.; Sullivan, Cornelius W.
1994-01-01
Although the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Calibration and Validation Program relies on the scientific community for the collection of bio-optical and atmospheric correction data as well as for algorithm development, it does have the responsibility for evaluating and comparing the algorithms and for ensuring that the algorithms are properly implemented within the SeaWiFS Data Processing System. This report consists of a series of sensitivity and algorithm (bio-optical, atmospheric correction, and quality control) studies based on Coastal Zone Color Scanner (CZCS) and historical ancillary data undertaken to assist in the development of SeaWiFS specific applications needed for the proper execution of that responsibility. The topics presented are as follows: (1) CZCS bio-optical algorithm comparison, (2) SeaWiFS ozone data analysis study, (3) SeaWiFS pressure and oxygen absorption study, (4) pixel-by-pixel pressure and ozone correction study for ocean color imagery, (5) CZCS overlapping scenes study, (6) a comparison of CZCS and in situ pigment concentrations in the Southern Ocean, (7) the generation of ancillary data climatologies, (8) CZCS sensor ringing mask comparison, and (9) sun glint flag sensitivity study.
[Work with visual demands. Assumption of responsibility for optical correction by the employer].
Hermans, G
2004-01-01
Comparison of the visual demands of work in a traditional office with those of work in an office equipped with a screen. Description of the vision problems encountered when focusing the eye at various distances and fixing it in various directions. Range of possibilities for optical correction for work with a screen (monofocal, bifocal, progressive or reading), specifying which of these optical corrections are exclusively reserved for this activity and should therefore become the employer's responsibility.
Liu, N; Li, X-W; Zhou, M-W; Biering-Sørensen, F
2015-08-01
The study design was an interventional training session. The objective of this study was to investigate the difference in response to self-assessment questions between the original and an adjusted version of a submodule of www.elearnSCI.org for student nurses. The study was conducted in a teaching hospital affiliated with Peking University, China. In all, 28 student nurses divided into two groups (groups A and B; 14 in each) received a print-out of a Chinese translation of the slides from the 'Maintaining skin integrity following spinal cord injury' submodule of www.elearnSCI.org for self-study. Both groups were then tested using the 10 self-assessment multiple-choice questions (MCQs) related to the same submodule. Group A used the original questions, whereas group B received an adjusted questionnaire. The responses to four conventional single-answer MCQs were nearly all correct in both groups. However, in three questions, group A, with the option 'All of the above', had a higher number of correct answers than group B, with multiple-answer MCQs. In addition, in another three questions, group A, using the original multiple-answer MCQs, had fewer correct answers than group B, where it was only necessary to tick a single incorrect answer. Variations in design influence the response to questions. The use of conventional single-answer MCQs should be reconsidered, as they only examine the recall of isolated knowledge facts. The 'All of the above' option should be avoided because it increases the number of correct answers arrived at by guessing. When using multiple-answer MCQs, it is recommended that the questions asked be in accordance with the content of www.elearnSCI.org.
Abnormal hippocampal shape in offenders with psychopathy.
Boccardi, Marina; Ganzola, Rossana; Rossi, Roberta; Sabattoli, Francesca; Laakso, Mikko P; Repo-Tiihonen, Eila; Vaurio, Olli; Könönen, Mervi; Aronen, Hannu J; Thompson, Paul M; Frisoni, Giovanni B; Tiihonen, Jari
2010-03-01
Posterior hippocampal volumes correlate negatively with the severity of psychopathy, but local morphological features are unknown. The aim of this study was to investigate hippocampal morphology in habitually violent offenders having psychopathy. Manual tracings of hippocampi from magnetic resonance images of 26 offenders (age: 32.5 +/- 8.4), with different degrees of psychopathy (12 high, 14 medium psychopathy based on the Psychopathy Checklist Revised), and 25 healthy controls (age: 34.6 +/- 10.8) were used for statistical modelling of local changes with a surface-based radial distance mapping method. Both offenders and controls had similar hippocampal volume and asymmetry ratios. Local analysis showed that the high psychopathy group had a significant depression along the longitudinal hippocampal axis, on both the dorsal and ventral aspects, when compared with the healthy controls and the medium psychopathy group. The opposite comparison revealed abnormal enlargement of the lateral borders in both the right and left hippocampi of both high and medium psychopathy groups versus controls, throughout CA1, CA2-3 and the subicular regions. These enlargement and reduction effects survived statistical correction for multiple comparisons in the main contrast (26 offenders vs. 25 controls) and in most subgroup comparisons. A statistical check excluded a possible confounding effect from amphetamine and polysubstance abuse. These results indicate that habitually violent offenders exhibit a specific abnormal hippocampal morphology, in the absence of total gray matter volume changes, that may relate to different autonomic modulation and abnormal fear-conditioning. 2009 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Codding, Robin S.; Archer, Jillian; Connell, James
2010-01-01
The purpose of this study was to replicate and extend a previous study by Burns ("Education and Treatment of Children" 28: 237-249, 2005) examining the effectiveness of incremental rehearsal on computation performance. A multiple-probe design across multiplication problem sets was employed for one participant to examine digits correct per minute…
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients, the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on the cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters (laser wavelength and receiver field of view), are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
Error correcting circuit design with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method that is able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
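The (7, 4) Hamming code underlying the circuit can be illustrated in software: 4 data bits are encoded into 7, and any single-bit error is located and corrected from the syndrome, which is the logic the CNTFET hardware implements (grouping several such blocks covers 16- and 32-bit words). The sketch below uses one common systematic bit ordering, which is an assumption about conventions, not the paper's exact layout.

```python
# Minimal sketch of the (7,4) Hamming code: encode, inject an error, correct it.
import numpy as np

# Systematic generator and parity-check matrices (G = [I | P], H = [P^T | I]).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    return data4 @ G % 2

def decode(recv7):
    syndrome = H @ recv7 % 2
    if syndrome.any():
        # the syndrome matches the column of H at the error position
        err = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        recv7 = recv7.copy()
        recv7[err] ^= 1
    return recv7[:4]  # systematic code: first 4 bits are the data

data = np.array([1, 0, 1, 1])
codeword = encode(data)
corrupted = codeword.copy()
corrupted[5] ^= 1                      # single-bit error
assert np.array_equal(decode(corrupted), data)
print("corrected:", decode(corrupted))
```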
NASA Astrophysics Data System (ADS)
Saitoh, N.; Hatta, H.; Imasu, R.; Shiomi, K.; Kuze, A.; Niwa, Y.; Machida, T.; Sawa, Y.; Matsueda, H.
2016-12-01
Thermal and Near Infrared Sensor for Carbon Observation (TANSO)-Fourier Transform Spectrometer (FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) has been observing carbon dioxide (CO2) concentrations in several atmospheric layers in the thermal infrared (TIR) band since its launch on 23 January 2009. We have compared TANSO-FTS TIR Version 1 (V1) CO2 data from 2010 to 2012 and CO2 data obtained by the Continuous CO2 Measuring Equipment (CME) installed on several JAL aircraft in the framework of the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project to evaluate bias in the TIR CO2 data in the lower and middle troposphere. Here, we have regarded the CME data obtained during the ascent and descent flights over several airports as part of CO2 vertical profiles there. The comparisons showed that the TIR V1 CO2 data had a negative bias against the CME CO2 data; the magnitude of the bias varied depending on season and latitude. We have estimated bias correction values for the TIR V1 lower and middle tropospheric CO2 data in each latitude band from 40°S to 60°N in each season on the basis of the comparisons with the CME CO2 profiles in limited areas over airports, applied the bias correction values to the TIR V1 CO2 data, and evaluated the quality of the bias-corrected TIR CO2 data globally through comparisons with CO2 data taken from the Nonhydrostatic Icosahedral Atmospheric Model (NICAM)-based Transport Model (TM). The bias-corrected TIR CO2 data showed a better agreement with the NICAM-TM CO2 than the original TIR data, which suggests that the bias correction values estimated in the limited areas are basically applicable to global TIR CO2 data. We have compared XCO2 data calculated from both the original and bias-corrected TIR CO2 data with TANSO-FTS SWIR and NICAM-TM XCO2 data; both the TIR XCO2 data agreed with SWIR and NICAM-TM XCO2 data within 1% except over the Sahara desert and strong source and sink regions.
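The season- and latitude-band-dependent bias correction described above can be sketched as a simple lookup and offset. The band edges, season keys and offset values below are purely illustrative placeholders, not the TANSO-FTS TIR V1 correction table.

```python
# Minimal sketch: apply an additive bias correction that depends on season and
# latitude band to satellite CO2 retrievals (values in ppm are placeholders).
import numpy as np

bias_ppm = {  # (season, latitude band) -> additive correction in ppm (illustrative)
    ("DJF", "40S-0"): 1.2, ("DJF", "0-30N"): 0.9, ("DJF", "30N-60N"): 1.5,
    ("JJA", "40S-0"): 0.8, ("JJA", "0-30N"): 1.1, ("JJA", "30N-60N"): 1.3,
}

def latitude_band(lat):
    if -40 <= lat < 0:
        return "40S-0"
    if 0 <= lat < 30:
        return "0-30N"
    if 30 <= lat <= 60:
        return "30N-60N"
    return None  # outside the corrected latitude range

def correct_co2(co2, lat, season):
    band = latitude_band(lat)
    if band is None or (season, band) not in bias_ppm:
        return co2  # leave data uncorrected where no table entry exists
    return co2 + bias_ppm[(season, band)]

print(correct_co2(np.array([392.1, 393.4]), lat=35.7, season="JJA"))
```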
Toledo, Estefanía; Wang, Dong D; Ruiz-Canela, Miguel; Clish, Clary B; Razquin, Cristina; Zheng, Yan; Guasch-Ferré, Marta; Hruby, Adela; Corella, Dolores; Gómez-Gracia, Enrique; Fiol, Miquel; Estruch, Ramón; Ros, Emilio; Lapetra, José; Fito, Montserrat; Aros, Fernando; Serra-Majem, Luis; Liang, Liming; Salas-Salvadó, Jordi; Hu, Frank B; Martínez-González, Miguel A
2017-10-01
Background: Lipid metabolites may partially explain the inverse association between the Mediterranean diet (MedDiet) and cardiovascular disease (CVD). Objective: We evaluated the associations between 1) lipid species and the risk of CVD (myocardial infarction, stroke, or cardiovascular death); 2) a MedDiet intervention [supplemented with extra virgin olive oil (EVOO) or nuts] and 1-y changes in these molecules; and 3) 1-y changes in lipid species and subsequent CVD. Design: With the use of a case-cohort design, we profiled 202 lipid species at baseline and after 1 y of intervention in the PREDIMED (PREvención con DIeta MEDiterránea) trial in 983 participants [230 cases and a random subcohort of 790 participants (37 overlapping cases)]. Results: Baseline concentrations of cholesterol esters (CEs) were inversely associated with CVD. A shorter chain length and higher saturation of some lipids were directly associated with CVD. After adjusting for multiple testing, direct associations remained significant for 20 lipids, and inverse associations remained significant for 6 lipids. When lipid species were weighted by the number of carbon atoms and double bonds, the strongest inverse association was found for CEs [HR: 0.39 (95% CI: 0.22, 0.68)] between extreme quintiles (P-trend = 0.002). Participants in the MedDiet + EVOO and MedDiet + nut groups experienced significant (P < 0.05) 1-y changes in 20 and 17 lipids, respectively, compared with the control group. Of these changes, only those in CE(20:3) in the MedDiet + nuts group remained significant after correcting for multiple testing. None of the 1-y changes was significantly associated with CVD risk after correcting for multiple comparisons. Conclusions: Although the MedDiet interventions induced some significant 1-y changes in the lipidome, they were not significantly associated with subsequent CVD risk. Lipid metabolites with a longer acyl chain and higher number of double bonds at baseline were significantly and inversely associated with the risk of CVD. © 2017 American Society for Nutrition.
Müller-Lutz, Anja; Ljimani, Alexandra; Stabinska, Julia; Zaiss, Moritz; Boos, Johannes; Wittsack, Hans-Jörg; Schleich, Christoph
2018-05-14
The study compares glycosaminoglycan chemical exchange saturation transfer (gagCEST) imaging of intervertebral discs corrected for B0 inhomogeneities only or for both B0 and B1 inhomogeneities. Lumbar intervertebral discs of 20 volunteers were examined with T2-weighted and gagCEST imaging. Field inhomogeneity correction was performed with B0 correction only and with correction of both B0 and B1. GagCEST effects measured by the asymmetric magnetization transfer ratio (MTRasym) and signal-to-noise ratio (SNR) were compared between both methods. Significantly higher MTRasym and SNR values were obtained in the nucleus pulposus using B0 and B1 correction compared with B0-corrected gagCEST. The gagCEST effect was significantly different in the nucleus pulposus compared with the annulus fibrosus for both methods. The B0 and B1 field inhomogeneity correction method leads to improved quality of gagCEST imaging in IVDs compared with B0 correction only.
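The quantity compared above, the asymmetric magnetization transfer ratio, is MTRasym(Δω) = (S(−Δω) − S(+Δω)) / S0, evaluated on the field-corrected Z-spectrum. The sketch below is a generic illustration with synthetic signal values and an assumed evaluation offset, not the authors' processing pipeline.

```python
# Minimal sketch: MTRasym from a B0-corrected Z-spectrum (synthetic data).
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, s0, delta_ppm):
    """Return MTRasym at +/-delta_ppm: (S(-dw) - S(+dw)) / S0."""
    s_pos = np.interp(+delta_ppm, offsets_ppm, z_spectrum)
    s_neg = np.interp(-delta_ppm, offsets_ppm, z_spectrum)
    return (s_neg - s_pos) / s0

offsets = np.linspace(-4, 4, 81)  # saturation offsets in ppm (increasing)
# Synthetic Z-spectrum: direct water saturation plus a small CEST dip near +1 ppm.
z = 1 - 0.3 * np.exp(-offsets**2 / 2) - 0.05 * np.exp(-(offsets - 1.0)**2 / 0.1)
print(f"MTRasym at 1.0 ppm: {mtr_asym(offsets, z, s0=1.0, delta_ppm=1.0):.3f}")
```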
Baila-Rueda, Lucía; Lamiquiz-Moneo, Itziar; Jarauta, Estíbaliz; Mateo-Gallego, Rocío; Perez-Calahorra, Sofía; Marco-Benedí, Victoria; Bea, Ana M; Cenarro, Ana; Civeira, Fernando
2018-01-15
Familial hypercholesterolemia (FH) is a genetic disorder that result in abnormally high low-density lipoprotein cholesterol levels, markedly increased risk of coronary heart disease (CHD) and tendon xanthomas (TX). However, the clinical expression is highly variable. TX are present in other metabolic diseases that associate increased sterol concentration. If non-cholesterol sterols are involved in the development of TX in FH has not been analyzed. Clinical and biochemical characteristics, non-cholesterol sterols concentrations and Aquilles tendon thickness were determined in subjects with genetic FH with (n = 63) and without (n = 40) TX. Student-t test o Mann-Whitney test were used accordingly. Categorical variables were compared using a Chi square test. ANOVA and Kruskal-Wallis tests were performed to multiple independent variables comparison. Post hoc adjusted comparisons were performed with Bonferroni correction when applicable. Correlations of parameters in selected groups were calculated applying the non-parametric Spearman correlation procedure. To identify variables associated with Achilles tendon thickness changes, multiple linear regression were applied. Patients with TX presented higher concentrations of non-cholesterol sterols in plasma than patients without xanthomas (P = 0.006 and 0.034, respectively). Furthermore, there was a significant association between 5α-cholestanol, β-sitosterol, desmosterol, 24S-hydroxycholesterol and 27-hydroxycholesterol concentrations and Achilles tendon thickness (p = 0.002, 0.012, 0.020, 0.045 and 0.040, respectively). Our results indicate that non-cholesterol sterol concentrations are associated with the presence of TX. Since cholesterol and non-cholesterol sterols are present in the same lipoproteins, further studies would be needed to elucidate their potential role in the development of TX.
Hallberg, Pär; Nagy, Julia; Karawajczyk, Malgorzata; Nordang, Leif; Islander, Gunilla; Norling, Pia; Johansson, Hans-Erik; Kämpe, Mary; Hugosson, Svante; Yue, Qun-Ying; Wadelius, Mia
2017-04-01
Angioedema is a rare and serious adverse drug reaction (ADR) to angiotensin-converting enzyme (ACE) inhibitor treatment. Dry cough is a common side effect of ACE inhibitors and has been identified as a possible risk factor for angioedema. We compared characteristics between patients with ACE inhibitor-induced angioedema and cough with the aim of identifying risk factors that differ between these adverse events. Data on patients with angioedema or cough induced by ACE inhibitors were collected from the Swedish database of spontaneously reported ADRs or from collaborating clinicians. Wilcoxon rank sum test, Fisher's exact test, and odds ratios (ORs) with 95% CIs were used to test for between-group differences. The significance threshold was set to P < 0.00128 to correct for multiple comparisons. Clinical characteristics were compared between 168 patients with angioedema and 121 with cough only. Smoking and concomitant selective calcium channel blocker treatment were more frequent among patients with angioedema than cough: OR = 4.3, 95% CI = 2.1-8.9, P = 2.2 × 10⁻⁵, and OR = 3.7, 95% CI = 2.0-7.0, P = 1.7 × 10⁻⁵. Angioedema cases were seen more often in male patients (OR = 2.2, 95% CI = 1.4-3.6, P = 1.3 × 10⁻⁴) and had longer time to onset and higher doses than those with cough (P = 3.2 × 10⁻¹⁰ and P = 2.6 × 10⁻⁴). A multiple model containing the variables smoking, concurrent calcium channel blocker treatment, male sex, and time to onset accounted for 26% of the variance between the groups. Smoking, comedication with selective calcium channel blockers, male sex, and longer treatment time were associated with ACE inhibitor-induced angioedema rather than cough.
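The between-group odds ratios with 95% confidence intervals reported above can be computed from a 2x2 table with the standard Woolf (logit) method, sketched below. The counts are illustrative, e.g. smokers versus non-smokers among angioedema and cough cases; they are not the study data.

```python
# Minimal sketch: odds ratio and 95% CI from a 2x2 table (Woolf/logit method).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Table layout: [[a, b], [c, d]] = [[exposed cases, unexposed cases],
                                          [exposed controls, unexposed controls]]."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

print(odds_ratio_ci(45, 123, 12, 109))  # -> OR with lower and upper 95% limits
```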
78 FR 60679 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... Company Model 717-200 airplanes. This AD was prompted by multiple reports of cracks of overwing frames. This AD requires repetitive inspections for cracking of the overwing frames, and corrective actions if necessary. We are issuing this AD to detect and correct such cracking that could sever a frame, which may...
The purpose of this paper is to provide guidelines for sub-slab sampling using dedicated vapor probes. Use of dedicated vapor probes allows for multiple sample events before and after corrective action and for vacuum testing to enhance the design and monitoring of a corrective m...
Comment on 3PL IRT Adjustment for Guessing
ERIC Educational Resources Information Center
Chiu, Ting-Wei; Camilli, Gregory
2013-01-01
Guessing behavior is an issue discussed widely with regard to multiple choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error or bias, which increases observed test scores. Guessing also can inflate random error variance. Correction or adjustment for guessing formulas…
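One of the classical adjustments alluded to above is the "formula score" correction for guessing (the 3PL IRT treatment instead models guessing with a lower-asymptote parameter). A minimal sketch of the classical formula, which is standard but not taken from this paper:

```python
# Minimal sketch: classical formula-score correction for guessing,
# corrected score = R - W / (k - 1) for k-option multiple-choice items.
def formula_score(num_right, num_wrong, num_options):
    return num_right - num_wrong / (num_options - 1)

print(formula_score(num_right=32, num_wrong=8, num_options=4))  # -> 29.33...
```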
ERIC Educational Resources Information Center
Cusick, Gretchen Ruth; Goerge, Robert M.; Bell, Katie Claussen
2009-01-01
This Chapin Hall report describes findings on the extent of system involvement among Illinois youth released from correctional facilities, tracking a population of youth under age 18 in Illinois following their release. Using administrative records, researchers develop profiles of reentry experiences across the many systems that serve youth and…
NASA Astrophysics Data System (ADS)
Antoine, David; Morel, Andre
1997-02-01
An algorithm is proposed for the atmospheric correction of ocean color observations by the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy is assessed in terms of errors in the retrieved marine reflectances.
The Core: Teaching Your Child the Foundations of Classical Education
ERIC Educational Resources Information Center
Bortins, Leigh A.
2010-01-01
In the past, correct spelling, the multiplication tables, the names of the state capitals and the American presidents were basics that all children were taught in school. Today, many children graduate without this essential knowledge. Most curricula today follow a haphazard sampling of topics with a focus on political correctness instead of…
Confidence-Based Assessments within an Adult Learning Environment
ERIC Educational Resources Information Center
Novacek, Paul
2013-01-01
Traditional knowledge assessments rely on multiple-choice type questions that only report a right or wrong answer. The reliance within the education system on this technique implies that a student who provides a correct answer purely through guesswork possesses knowledge equivalent to a student who actually knows the correct answer. A more complete…
The Effect and Implications of a "Self-Correcting" Assessment Procedure
ERIC Educational Resources Information Center
Francis, Alisha L.; Barnett, Jerrold
2012-01-01
We investigated Montepare's (2005, 2007) self-correcting procedure for multiple-choice exams. Findings related to memory suggest this procedure should lead to improved retention by encouraging students to distribute the time spent reviewing the material. Results from a general psychology class (n = 98) indicate that the benefits are not as…
Adderson, Elisabeth E.; Boudreaux, Jan W.; Cummings, Jessica R.; Pounds, Stanley; Wilson, Deborah A.; Procop, Gary W.; Hayden, Randall T.
2008-01-01
We compared the relative levels of effectiveness of three commercial identification kits and three nucleic acid amplification tests for the identification of coryneform bacteria by testing 50 diverse isolates, including 12 well-characterized control strains and 38 organisms obtained from pediatric oncology patients at our institution. Between 33.3 and 75.0% of control strains were correctly identified to the species level by phenotypic systems or nucleic acid amplification assays. The most sensitive tests were the API Coryne system and amplification and sequencing of the 16S rRNA gene using primers optimized for coryneform bacteria, which correctly identified 9 of 12 control isolates to the species level, and all strains with a high-confidence call were correctly identified. Organisms not correctly identified were species not included in the test kit databases or not producing a pattern of reactions included in kit databases or which could not be differentiated among several genospecies based on reaction patterns. Nucleic acid amplification assays had limited abilities to identify some bacteria to the species level, and comparison of sequence homologies was complicated by the inclusion of allele sequences obtained from uncultivated and uncharacterized strains in databases. The utility of rpoB genotyping was limited by the small number of representative gene sequences that are currently available for comparison. The correlation between identifications produced by different classification systems was poor, particularly for clinical isolates. PMID:18160450
NASA Astrophysics Data System (ADS)
Gatzsche, Kathrin; Babel, Wolfgang; Falge, Eva; Pyles, Rex David; Tha Paw U, Kyaw; Raabe, Armin; Foken, Thomas
2018-05-01
The ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) model, with a higher-order closure for tall vegetation, has already been successfully tested and validated for homogeneous spruce forests. The aim of this paper is to test the model using a footprint-weighted tile approach for a clearing with a heterogeneous structure of the underlying surface. The comparison with flux data shows a good agreement with a footprint-aggregated tile approach of the model. However, the results of a comparison with a tile approach based on the mean land use classification of the clearing are not significantly different. It is assumed that the footprint model is not accurate enough to separate small-scale heterogeneities. All measured fluxes are corrected by forcing energy balance closure of the test data, either by maintaining the measured Bowen ratio or by attributing the residual to the sensible and latent heat fluxes according to their contributions to the buoyancy flux. The comparison with the model, in which the energy balance is closed, shows that the buoyancy correction fits the measured data better for Bowen ratios > 1.5. For lower Bowen ratios, the correction probably lies between the two methods, but the amount of available data was too small to draw a conclusion. Under the assumption of similarity between water and carbon dioxide fluxes, no correction of the net ecosystem exchange is necessary for Bowen ratios > 1.5.
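The Bowen-ratio-preserving closure correction mentioned above distributes the residual Res = Rn − G − H − LE to the sensible (H) and latent (LE) heat fluxes in proportion to their magnitudes, which leaves the Bowen ratio H/LE unchanged. A minimal sketch with illustrative flux values (W m⁻²):

```python
# Minimal sketch: energy balance closure correction preserving the Bowen ratio.
def close_energy_balance(rn, g, h, le):
    residual = rn - g - h - le          # unclosed energy balance residual
    h_corr = h + residual * h / (h + le)
    le_corr = le + residual * le / (h + le)
    return h_corr, le_corr

h_corr, le_corr = close_energy_balance(rn=520.0, g=60.0, h=180.0, le=220.0)
print(h_corr, le_corr, h_corr / le_corr)  # Bowen ratio equals the original 180/220
```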
Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; de Brevern, Alexandre G
2012-09-01
Comparison of multiple protein structures has a broad range of applications in the analysis of protein structure, function and evolution. Multiple structure alignment tools (MSTAs) are necessary to obtain a simultaneous comparison of a family of related folds. In this study, we have developed a method for multiple structure comparison largely based on sequence alignment techniques. A widely used Structural Alphabet named Protein Blocks (PBs) was used to transform the information on 3D protein backbone conformation into a 1D sequence string. A progressive alignment strategy similar to CLUSTALW was adopted for multiple PB sequence alignment (mulPBA). Highly similar stretches identified by the pairwise alignments are given higher weights during the alignment. The residue equivalences from PB-based alignments are used to obtain a three-dimensional fit of the structures, followed by an iterative refinement of the structural superposition. Systematic comparisons using benchmark datasets of MSTAs underline that the alignment quality is better than MULTIPROT, MUSTANG and the alignments in HOMSTRAD in more than 85% of the cases. Comparison with other rigid-body and flexible MSTAs also indicates that mulPBA alignments are superior to most of the rigid-body MSTAs and highly comparable to the flexible alignment methods. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki
2017-12-04
Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.
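The repeatability analysis above (per-subject intersession CV, compared between acquisition schemes with paired t-tests under a Bonferroni correction) can be sketched as follows. The arrays of subject-by-session summary values and the number of compared scheme pairs are illustrative assumptions, not the study data.

```python
# Minimal sketch: intersession coefficient of variance per subject plus a
# paired t-test with Bonferroni correction across acquisition schemes.
import numpy as np
from scipy.stats import ttest_rel

def intersession_cv(values):
    """values: array of shape (n_subjects, n_sessions)."""
    return values.std(axis=1, ddof=1) / values.mean(axis=1)

rng = np.random.default_rng(1)
seq_fb = 60 + rng.normal(0, 3.0, size=(9, 5))   # sequential acquisition, free breathing
int_fb = 60 + rng.normal(0, 1.0, size=(9, 5))   # interleaved acquisition, free breathing

cv_seq, cv_int = intersession_cv(seq_fb), intersession_cv(int_fb)
t, p = ttest_rel(cv_seq, cv_int)
n_comparisons = 3                                # e.g. number of scheme pairs tested
print(f"paired t = {t:.2f}, Bonferroni-corrected p = {min(p * n_comparisons, 1.0):.4f}")
```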
Montero, Carlos Segundo; Meneses, David Alberto; Alvarado, Fernando; Godoy, Wilmer; Rosero, Diana Isabel; Ruiz, Jose Manuel
2017-12-01
Multiple techniques are used for distal fixation in patients with neuromuscular scoliosis. Although there is evidence of benefit with S2 alar iliac (S2AI) fixation, this remains controversial. The objective of this study is to evaluate the radiological outcomes and complications associated with this surgical technique in a pediatric population. An observational retrospective case series study was performed. All pediatric patients between January 2011 and February 2014 diagnosed with neuromuscular scoliosis associated with pelvic obliquity, which required surgery with fixation to S2AI, were included. Clinical and radiological findings and adverse events are presented with measures of central tendency. Comparison of deformity correction was carried out using a non-parametric analysis for related samples (Wilcoxon signed-rank test). Significance was set at P<0.05. A total of 31 patients diagnosed with neuromuscular scoliosis who met the inclusion criteria were analyzed. The leading cause of neuromuscular scoliosis was spastic cerebral palsy (CP), in 23 (74.2%) patients. The correction of pelvic obliquity in the immediate postoperative period was 76%, which is statistically significant. The extent of correction that patients maintained at the end of follow-up was analyzed, and no significant differences were found compared with the immediate postoperative pelvic obliquity. The mean follow-up time was 9±7 months. Postoperative adverse events occurred in 64.5% of patients; the most common was pneumonia (14.8%). The overall rate of complications related to instrumentation was low (1.9%), corresponding to one patient with an intra-articular screw in the left hip that required repositioning. S2AI fixation for the treatment of neuromuscular scoliosis is a safe alternative, in which the onset of adverse events is related to the comorbidities of patients rather than to the surgical procedure itself. An approximate correction of 76% of pelvic obliquity is maintained during follow-up.
Reliability model of a monopropellant auxiliary propulsion system
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1971-01-01
A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
Examination of Association to Autism of Common Genetic Variation in Genes Related to Dopamine
Anderson, B.M.; Schnetz-Boutaud, N.; Bartlett, J.; Wright, H.H.; Abramson, R.K.; Cuccaro, M.L.; Gilbert, J.R.; Pericak-Vance, M.A.; Haines, J.L.
2010-01-01
Autism is a severe neurodevelopmental disorder characterized by a triad of complications. Autistic individuals display significant disturbances in language and reciprocal social interactions, combined with repetitive and stereotypic behaviors. Prevalence studies suggest that autism is more common than originally believed, with recent estimates citing a rate of one in 150. Although genome-wide approaches have yielded multiple suggestive regions, a specific risk locus has yet to be identified and widely confirmed. Because many etiologies have been suggested for this complex syndrome, we hypothesize that one of the difficulties in identifying autism genes is that multiple genetic variants may be required to significantly increase the risk of developing autism. We therefore took the alternative approach of examining 14 prominent dopamine pathway candidate genes in detail by genotyping 28 SNPs. Although we did observe a nominally significant association for rs2239535 (p=.008) on chromosome 20, single-locus analysis did not reveal any results that remained significant after correction for multiple comparisons. No significant interaction was identified when Multifactor Dimensionality Reduction (MDR) was employed to test specifically for multilocus effects. Although genome-wide linkage scans in autism have provided support for linkage to various loci along the dopamine pathway, our study does not provide strong evidence of linkage or association to any specific gene or combination of genes within the pathway. These results demonstrate that common genetic variation within the tested genes of this pathway plays at most a minor to moderate role in overall autism pathogenesis. PMID:19360691
Ochi, H; Ikuma, I; Toda, H; Shimada, T; Morioka, S; Moriyama, K
1989-12-01
In order to determine whether the isovolumic relaxation period (IRP) reflects left ventricular relaxation under different afterload conditions, 17 anesthetized, open-chest dogs were studied, and the left ventricular pressure decay time constant (T) was calculated. In 12 dogs, angiotensin II and nitroprusside were administered, with the heart rate held constant at 90 beats/min. Multiple linear regression analysis showed that the aortic dicrotic notch pressure (AoDNP) and T were major determinants of IRP, while left ventricular end-diastolic pressure was a minor determinant. Multiple linear regression correlating T with IRP and AoDNP did not further improve the correlation coefficient compared with that between T and IRP. We concluded that correction of IRP by AoDNP is not necessary to predict T. The effects of ascending aortic constriction or angiotensin II on IRP were examined in five dogs after pretreatment with propranolol. Aortic constriction caused a significant decrease in IRP and T, while angiotensin II produced a significant increase in IRP and T. IRP was affected by the change in afterload; however, IRP and T always changed in the same direction. These results demonstrate that IRP can be substituted for T and reflects left ventricular relaxation even under different afterload conditions. We conclude that IRP is a simple parameter that can easily be used to evaluate left ventricular relaxation in clinical situations.
ERIC Educational Resources Information Center
Vonkova, Hana; Zamarro, Gema; Hitt, Collin
2018-01-01
Self-reports are an indispensable source of information in education research but they are often affected by heterogeneity in reporting behavior. Failing to correct for this heterogeneity can lead to invalid comparisons across groups. The researchers use the parametric anchoring vignette method to correct for cross-country incomparability of…
A Comparison of Two Approaches to Correction of Restriction of Range in Correlation Analysis
ERIC Educational Resources Information Center
Wiberg, Marie; Sundstrom, Anna
2009-01-01
A common problem in predictive validity studies in the educational and psychological fields, e.g. in educational and employment selection, is restriction in range of the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
A Comparison of EFL Teachers' and Students' Attitudes to Oral Corrective Feedback
ERIC Educational Resources Information Center
Roothooft, Hanne; Breeze, Ruth
2016-01-01
A relatively small number of studies on beliefs about oral corrective feedback (CF) have uncovered a mismatch between teachers' and students' attitudes which is potentially harmful to the language learning process, not only because students may become demotivated when their expectations are not met, but also because teachers appear to be reluctant…
ERIC Educational Resources Information Center
Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane
2016-01-01
A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
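One of the classical formulas referred to above is Thorndike's (1949) Case 2 correction for direct range restriction on the predictor, r_c = r·(S/s) / sqrt(1 − r² + r²·(S/s)²), where s is the restricted and S the unrestricted predictor standard deviation. A minimal sketch of that standard formula (the illustrative inputs are not from the paper):

```python
# Minimal sketch: Thorndike Case 2 correction for direct range restriction.
import math

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    ratio = sd_unrestricted / sd_restricted
    return (r_restricted * ratio) / math.sqrt(
        1 - r_restricted**2 + (r_restricted**2) * ratio**2
    )

print(round(thorndike_case2(r_restricted=0.30, sd_unrestricted=10.0, sd_restricted=6.0), 3))
```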
Children's Evaluation of the Certainty of Another Person's Inductive Inferences and Guesses
ERIC Educational Resources Information Center
Pillow, Bradford H.; Pearson, RaeAnne M.
2012-01-01
In three studies, 5-10-year-old children and an adult comparison group judged another's certainty in making inductive inferences and guesses. Participants observed a puppet make strong inductions, weak inductions, and guesses. Participants either had no information about the correctness of the puppet's conclusion, knew that the puppet was correct,…
Spelling Instruction in Spanish: A Comparison of Self-Correction, Visual Imagery and Copying
ERIC Educational Resources Information Center
Gaintza, Zuriñe; Goikoetxea, Edurne
2016-01-01
Two randomised control experiments examined spelling outcomes in a repeated measures design (pre-test, post-tests; 1-day, 1-month follow-up, 5-month follow-up), where students learned Spanish irregular words through (1) immediate feedback using self-correction, (2) visual imagery where children imagine and represent words using movement, and (3)…
A Comparison of Equality in Computer Algebra and Correctness in Mathematical Pedagogy (II)
ERIC Educational Resources Information Center
Bradford, Russell; Davenport, James H.; Sangwin, Chris
2010-01-01
A perennial problem in computer-aided assessment is that "a right answer", pedagogically speaking, is not the same thing as "a mathematically correct expression", as verified by a computer algebra system, or indeed other techniques such as random evaluation. Paper I in this series considered the difference in cases where there was "the right…
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
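The same workflow (omnibus one-way ANOVA, then 95% confidence intervals for pairwise mean differences) can be sketched outside a spreadsheet; the example below uses SciPy's Tukey HSD (assumes SciPy >= 1.8) rather than the paper's spreadsheet template, with illustrative sample values.

```python
# Minimal sketch: one-way ANOVA followed by Tukey HSD pairwise 95% CIs.
from scipy.stats import f_oneway, tukey_hsd

a = [24.1, 25.3, 23.8, 26.0, 24.7]
b = [27.9, 28.4, 27.1, 29.0, 28.2]
c = [24.5, 25.0, 24.2, 25.8, 24.9]

f_stat, p = f_oneway(a, b, c)
print(f"F = {f_stat:.2f}, p = {p:.4f}")

if p < 0.05:                       # omnibus test significant: examine the pairs
    res = tukey_hsd(a, b, c)
    ci = res.confidence_interval(confidence_level=0.95)
    print(res)                     # pairwise statistics and p-values
    print(ci.low, ci.high)         # 95% CIs for differences between means
```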
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
Alió Del Barrio, Jorge L; Vargas, Verónica; Al-Shymali, Olena; Alió, Jorge L
2017-01-01
Small Incision Lenticule Extraction (SMILE) is a flap-free intrastromal technique for the correction of myopia and myopic astigmatism. To date, this technique lacks automated centration and cyclotorsion control, so several concerns have been raised regarding its capability to correct moderate or high levels of astigmatism. The objective of this paper is to review the reported SMILE outcomes for the correction of myopic astigmatism associated with a cylinder over 0.75 D, and its comparison with the outcomes reported with the excimer laser-based corneal refractive surgery techniques. A total of five studies clearly reporting SMILE astigmatic outcomes were identified. SMILE shows acceptable outcomes for the correction of myopic astigmatism, although a general agreement exists about the superiority of the excimer laser-based techniques for low to moderate levels of astigmatism. Manual correction of the static cyclotorsion should be adopted for any SMILE astigmatic correction over 0.75 D.
NASA Astrophysics Data System (ADS)
Muller, Dagmar; Krasemann, Hajo; Zuhlke, Marco; Doerffer, Roland; Brockmann, Carsten; Steinmetz, Francois; Valente, Andre; Brotas, Vanda; Grant, Michael G.; Sathyendranath, Shubha; Melin, Frederic; Franz, Bryan A.; Mazeran, Constant; Regner, Peter
2016-08-01
The Ocean Colour Climate Change Initiative (OC-CCI) provides a long-term time series of ocean colour data and investigates the detectable climate impact. A reliable and stable atmospheric correction (AC) procedure is the basis for ocean colour products of the necessary high quality. The selection of atmospheric correction processors is repeated regularly based on a round robin exercise, at the latest when a revised production and release of the OC-CCI merged product is scheduled. Most of the AC processors are under constant development and changes are implemented to improve the quality of satellite-derived retrievals of remote sensing reflectances. The changes between versions of the inter-comparison are not restricted to the implementation of AC processors. There are activities to improve the quality flagging for some processors, and the sensor-specific behaviour of system vicarious calibration for AC algorithms is being studied widely. Each inter-comparison starts with an updated in-situ database, as more spectra are included in order to broaden the temporal and spatial range of satellite match-ups. While the OC-CCI's focus lay on case-1 waters in the past, it has now expanded to the retrieval of case-2 products. In light of this goal, new bidirectional correction procedures (normalisation) for the remote sensing spectra have been introduced. As in-situ measurements are not always available at the satellite sensor specific central wavelengths, a band-shift algorithm has to be applied to the dataset. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparisons between in-situ and satellite-derived water leaving reflectance spectra is aided by a ranking system. In principle, the statistical parameters are transformed into relative scores, which evaluate the relationship of quality dependent on the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. A comparison of round robin results for the OC-CCI version 2 and the current version 3 is presented and some major changes are highlighted.
Hayashi, Toshiyuki; Fukui, Tomoyasu; Nakanishi, Noriko; Yamamoto, Saki; Tomoyasu, Masako; Osamura, Anna; Ohara, Makoto; Yamamoto, Takeshi; Ito, Yasuki; Hirano, Tsutomu
2017-11-13
Following publication of the original article [1], the authors identified a number of errors. In Results (P.3), Table 1 (P.4), Table 5 (P.9) and Supplementary Table 1, the correct unit for adiponectin was μg/mL. In Table 1 (P.4), the correct value for the post-treatment body weight in the dapagliflozin group was 76.2±14.8. In Table 6 (P.10), the correct value for the pre-treatment sd-LDL/LDL-C in the decreased LDL-C group was 0.38±0.10.
NASA Astrophysics Data System (ADS)
Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk
2010-06-01
In this paper a method is presented to accurately measure the radius of curvature of different types of curved surfaces, with radii of curvature of 38 000, 18 000 and 8000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in measurement were calculated by means of ray tracing simulations and the uncertainty budget was estimated to within λ/40.
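A minimal sketch of the flat-fielding step mentioned above, in its generic form (the paper's actual processing may differ; the array names are placeholders):

    import numpy as np

    def flat_field(raw, flat, dark=None):
        """Divide out pixel-to-pixel gain variations:
        corrected = (raw - dark) / normalized (flat - dark)."""
        if dark is None:
            dark = np.zeros_like(raw)
        gain = flat - dark
        gain = gain / gain.mean()      # normalize so the overall level is preserved
        return (raw - dark) / gain

    # raw_img, flat_img and dark_img would be 2-D arrays read from the detector:
    # corrected = flat_field(raw_img, flat_img, dark_img)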
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
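GENSENG itself couples a hidden Markov model with negative binomial regression; as a loose illustration of only the regression component, the sketch below fits read-depth counts against two hypothetical bias covariates (GC content and mappability) with statsmodels, on simulated data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    gc = rng.uniform(0.3, 0.7, n)            # hypothetical GC content per window
    mapp = rng.uniform(0.8, 1.0, n)          # hypothetical mappability per window
    mu = np.exp(3.0 + 1.2 * gc + 0.5 * mapp)
    depth = rng.negative_binomial(n=5, p=5 / (5 + mu))   # simulated read depth

    X = sm.add_constant(np.column_stack([gc, mapp]))
    model = sm.GLM(depth, X, family=sm.families.NegativeBinomial(alpha=0.2))
    result = model.fit()
    # The fitted values estimate the bias-expected depth; the ratio
    # depth / result.fittedvalues gives a bias-corrected copy-number signal.
    print(result.params)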
Adhi, Mohammad Idrees; Aly, Syed Moyn
2018-04-01
To find differences between One-Correct and One-Best multiple-choice questions with relation to student scores, post-exam item analyses results and student perception. This comparative cross-sectional study was conducted at the Dow University of Health Sciences, Karachi, from November 2010 to April 2011, and comprised medical students. Data was analysed using SPSS 18. Of the 207 participants, 16(7.7%) were boys and 191(92.3%) were girls. The mean score in Paper I was 18.62±4.7, while in Paper II it was 19.58±6.1. One-Best multiple-choice questions performed better than One-Correct. There was no statistically significant difference in the mean scores of the two papers or in the difficulty indices. Difficulty and discrimination indices correlated well in both papers. Cronbach's alpha of paper I was 0.584 and that of paper II was 0.696. Point-biserial values were better for paper II than for paper I. Most students expressed dissatisfaction with paper II. One-Best multiple-choice questions showed better scores, higher reliability, better item performance and correlation values.
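The item statistics mentioned here (difficulty index, point-biserial discrimination, Cronbach's alpha) can be computed from a 0/1 response matrix; a generic sketch with made-up data, not the study's own analysis:

    import numpy as np

    # Hypothetical responses: rows = examinees, columns = items (1 = correct)
    X = np.array([[1, 1, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 1, 1],
                  [1, 1, 1, 1],
                  [0, 0, 0, 1]], dtype=float)

    total = X.sum(axis=1)
    difficulty = X.mean(axis=0)              # proportion correct per item

    # Uncorrected point-biserial: correlation of each item with the total score
    pbis = np.array([np.corrcoef(X[:, j], total)[0, 1] for j in range(X.shape[1])])

    # Cronbach's alpha
    k = X.shape[1]
    alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))

    print(difficulty, pbis, alpha)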
Habit Breaking Appliance for Multiple Corrections
Abraham, Reji; Kamath, Geetha; Sodhi, Jasmeet Singh; Sodhi, Sonia; Rita, Chandki; Sai Kalyan, S.
2013-01-01
Tongue thrusting and thumb sucking are the most commonly seen oral habits which act as the major etiological factors in the development of dental malocclusion. This case report describes a fixed habit correcting appliance, Hybrid Habit Correcting Appliance (HHCA), designed to eliminate these habits. This hybrid appliance is effective in less compliant patients and if desired can be used along with the fixed orthodontic appliance. Its components can act as mechanical restrainers and muscle retraining devices. It is also effective in cases with mild posterior crossbites. PMID:24198976
Treatment of hypophosphatemia in the intensive care unit: a review
2010-01-01
Introduction Currently no evidence-based guideline exists for the approach to hypophosphatemia in critically ill patients. Methods We performed a narrative review of the medical literature to identify the incidence, symptoms, and treatment of hypophosphatemia in critically ill patients. Specifically, we searched for answers to the questions whether correction of hypophosphatemia is associated with improved outcome, and whether a certain treatment strategy is superior. Results Incidence: hypophosphatemia is frequently encountered in the intensive care unit; and critically ill patients are at increased risk for developing hypophosphatemia due to the presence of multiple causal factors. Symptoms: hypophosphatemia may lead to a multitude of symptoms, including cardiac and respiratory failure. Treatment: hypophosphatemia is generally corrected when it is symptomatic or severe. However, although multiple studies confirm the efficacy and safety of intravenous phosphate administration, it remains uncertain when and how to correct hypophosphatemia. Outcome: in some studies, hypophosphatemia was associated with higher mortality; a paucity of randomized controlled evidence exists for whether correction of hypophosphatemia improves the outcome in critically ill patients. Conclusions Additional studies addressing the current approach to hypophosphatemia in critically ill patients are required. Studies should focus on the association between hypophosphatemia and morbidity and/or mortality, as well as the effect of correction of this electrolyte disorder. PMID:20682049
Unsteady loads due to propulsive lift configurations. Part A: Investigation of scaling laws
NASA Technical Reports Server (NTRS)
Morton, J. B.; Haviland, J. K.
1978-01-01
This study covered scaling laws and pressure measurements made to determine details of the large-scale jet structure and to verify the scaling laws by direct comparison. The basis of comparison was a test facility at NASA Langley in which a JT-15D exhausted over a boilerplate airfoil surface to reproduce upper surface blowing conditions. A quarter-scale model of this facility was built, using cold jets. A comparison between full-scale and model pressure coefficient spectra, presented as functions of Strouhal numbers, showed fair agreement; however, a shift of spectral peaks was noted. This was not believed to be due to Mach number or Reynolds number effects, but did appear to be traceable to discrepancies in jet temperatures. A correction for jet temperature was then tried, similar to one used for far-field noise prediction. This was found to correct the spectral peak discrepancy.
From LIMS to OMPS-LP: limb ozone observations for future reanalyses
NASA Astrophysics Data System (ADS)
Wargan, K.; Kramarova, N. A.; Remsberg, E. E.; Coy, L.; Harvey, L.; Livesey, N. J.; Pawson, S.
2017-12-01
High vertical resolution and accuracy of ozone data from satellite-borne limb sounders have made them an invaluable tool in scientific studies of the middle and upper atmosphere. However, it was not until recently that these measurements were successfully incorporated in atmospheric reanalyses: of the major multidecadal reanalyses only ECMWF's ERA-Interim/ERA5 and NASA's MERRA-2 use limb ozone data. Validation and comparison studies have demonstrated that the addition of observations from the Microwave Limb Sounder (MLS) on EOS Aura greatly improved the quality of ozone fields in MERRA-2 making these assimilated data sets useful for scientific research. In this presentation, we will show the results of test experiments assimilating retrieved ozone from the Limb Infrared Monitor of the Stratosphere (LIMS, 1978/1979) and Ozone Mapping Profiler Suite Limb Profiler (OMPS-LP, 2012 to present). Our approach builds on the established assimilation methodology used for MLS in MERRA-2 and, in the case of OMPS-LP, extends the excellent record of MLS ozone assimilation into the post-EOS era in Earth observations. We will show case studies, discuss comparisons of the new experiments with MERRA-2, strategies for bias correction and the potential for combined assimilation of multiple limb ozone data types in future reanalyses for studies of multidecadal stratospheric ozone changes including trends.
Genotyping of HLA-I and HLA-II alleles in Chinese patients with paraneoplastic pemphigus.
Liu, Q; Bu, D-F; Li, D; Zhu, X-J
2008-03-01
Class I and class II HLA genes are thought to play a role in the immunopathogenesis of bullous dermatoses such as pemphigus vulgaris and pemphigus foliaceus, but we know little about the genetic background of paraneoplastic pemphigus (PNP) in Chinese patients. To identify class I and class II HLA alleles by genotyping in Chinese patients with PNP, and to find out the possible association between HLA alleles and disease susceptibility. Nineteen Chinese patients with PNP were enrolled in this study. HLA-A, B, C, DRB1 and DQB1 alleles were typed by polymerase chain reaction and a colour-coded sequence-specific oligonucleotide probes method. The frequencies of HLA-B*4002/B*4004, B*51, B*52, Cw*14, DQB1*0301, DRB1*08 and DRB1*11 were relatively prevalent in Chinese Han patients with PNP in comparison with normal controls. After correction for multiple comparisons, Cw*14 remained statistically significant, and the other alleles were unremarkable in these patients. The genetic background predisposing to PNP may be different in patients from various races and areas. HLA-Cw*14 may be the predisposing allele to PNP in Chinese patients, which is different from the predisposing allele in French patients with PNP and the alleles predisposing to pemphigus vulgaris and pemphigus foliaceus.
Chie, Wei-Chu; Blazeby, Jane M; Hsiao, Chin-Fu; Chiu, Herng-Chia; Poon, Ronnie T; Mikoshiba, Naoko; Al-Kadhim, Gillian; Heaton, Nigel; Calara, Jozer; Collins, Peter; Caddick, Katharine; Costantini, Anna; Vilgrain, Valerie
2017-10-01
The aim of this study is to explore the possible effects of clinical and cultural characteristics of hepatocellular carcinoma on patients' health-related quality of life (HRQoL). Patients with hepatocellular carcinoma from Asian and European countries completed the EORTC QLQ-C30 and the EORTC QLQ-HCC18. Comparisons were made using Student's t-test and the Wilcoxon rank-sum test, with the false discovery rate method used to correct for multiple comparisons. Multiway analysis of variance and model selection were used to assess the effects of clinical characteristics and geographic areas. Two hundred and twenty-seven patients with hepatocellular carcinoma completed questionnaires. After adjusting for demographic and clinical characteristics, Asian patients still had significantly better HRQoL scores in emotional functioning and insomnia (QLQ-C30) and in sexual interest (QLQ-HCC18). We also found an interaction between geographic region and marital status in physical functioning (QLQ-C30) and fatigue (QLQ-HCC18): married Europeans had worse HRQoL scores than single Asians. Both clinical characteristics and geographic areas affected HRQoL in patients with hepatocellular carcinoma. Cultural differences and clinical differences in the pattern of disease, due to the active surveillance practised in Asian countries, may explain the results. © 2016 John Wiley & Sons Australia, Ltd.
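A small sketch of the false-discovery-rate adjustment used here (and the Bonferroni adjustment used in several of the other studies above), applied with statsmodels to a made-up vector of p-values:

    from statsmodels.stats.multitest import multipletests

    p_values = [0.001, 0.008, 0.039, 0.041, 0.12, 0.49]   # hypothetical raw p-values

    rej_fdr, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
    rej_bon, p_bon, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")

    print("Benjamini-Hochberg:", rej_fdr, p_fdr)
    print("Bonferroni:        ", rej_bon, p_bon)

The FDR procedure is typically less conservative than Bonferroni, which is why the two can lead to different sets of "significant" comparisons on the same data.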
Safe driving and executive functions in healthy middle-aged drivers.
León-Domínguez, Umberto; Solís-Marcos, Ignacio; Barrio-Álvarez, Elena; Barroso Y Martín, Juan Manuel; León-Carrión, José
2017-01-01
The introduction of the point system driver's license in several European countries could offer a valid framework for evaluating driving skills. This is the first study to use this framework to assess the functional integrity of executive functions in middle-aged drivers with full points, partial points or no points on their driver's license (N = 270). The purpose of this study is to find differences in executive functions that could be determinants of safe driving. Cognitive tests were used to assess attention processes, processing speed, planning, cognitive flexibility, and inhibitory control. Analyses of covariance (ANCOVAs) were used for group comparisons while adjusting for education level. The Bonferroni method was used to correct for multiple comparisons. Overall, drivers with full points on their license showed better scores than the other two groups. In particular, significant differences were found in reaction times on Simple and Conditioned Attention tasks (both p-values < 0.001) and in the number of type-III errors on the Tower of Hanoi task (p = 0.026). Differences in reaction time on attention tasks could serve as neuropsychological markers for safe driving. Further analysis should be conducted in order to determine the behavioral impact of impaired executive functioning on driving ability.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2011-09-20
Jets are identified and their properties studied in center-of-mass energy √s = 7 TeV proton-proton collisions at the Large Hadron Collider using charged particles measured by the ATLAS inner detector. Events are selected using a minimum bias trigger, allowing jets at very low transverse momentum to be observed and their characteristics in the transition to high-momentum fully perturbative jets to be studied. Jets are reconstructed using the anti-kt algorithm applied to charged particles with two radius parameter choices, 0.4 and 0.6. An inclusive charged jet transverse momentum cross section measurement from 4 GeV to 100 GeV is shown for four ranges in rapidity extending to 1.9 and corrected to charged particle-level truth jets. The transverse momenta and longitudinal momentum fractions of charged particles within jets are measured, along with the charged particle multiplicity and the particle density as a function of radial distance from the jet axis. Comparison of the data with the theoretical models implemented in existing tunings of Monte Carlo event generators indicates reasonable overall agreement between data and Monte Carlo. In conclusion, these comparisons are sensitive to Monte Carlo parton showering, hadronization, and soft physics models.
Sulzberger Ice Shelf Tidal Signal Reconstruction Using InSAR
NASA Astrophysics Data System (ADS)
Baek, S.; Shum, C.; Yi, Y.; Kwoun, O.; Lu, Z.; Braun, A.
2005-12-01
Synthetic Aperture Radar Interferometry (InSAR) and Differential InSAR (DInSAR) have been demonstrated as useful techniques to detect surface deformation over ice sheets and ice shelves in Antarctica. In this study, we use multiple-pass InSAR from ERS-1 and ERS-2 data to detect ocean tidal deformation, with an attempt towards modeling of tides underneath an ice shelf. A high-resolution Digital Elevation Model (DEM) from repeat-pass interferometry, with ICESat profiles as ground control points, is used for topographic correction over the study region in the Sulzberger Ice Shelf, West Antarctica. Tidal differences measured by InSAR are obtained from the phase difference between a point on the grounded ice and a point on the ice shelf. Comparison with global and regional tide models (including NAO, TPXO, GOT, and CATS) at a selected point shows that the tidal amplitude is consistent with the values predicted from tide models to within 4 cm RMS. Even though the lack of data hinders the effort to readily develop a tide model using longer-term data (time series spanning years), we suggest a method to reconstruct selected tidal constituents using both vertical deformation from InSAR and knowledge of the aliased tidal frequencies from the ERS satellites. Finally, we report the comparison of tidal deformation observed by InSAR and ICESat altimetry.
Plante, David T; Jensen, J Eric; Schoerning, Laura; Winkelman, John W
2012-01-01
Insomnia is closely related to major depressive disorder (MDD) both cross-sectionally and longitudinally, and as such, offers potential opportunities to refine our understanding of the neurobiology of both sleep and mood disorders. Clinical and basic science data suggest a role for reduced γ-aminobutyric acid (GABA) in both MDD and primary insomnia (PI). Here, we have utilized single-voxel proton magnetic resonance spectroscopy (1H-MRS) at 4 Tesla to examine GABA relative to total creatine (GABA/Cr) in the occipital cortex (OC), anterior cingulate cortex (ACC), and thalamus in 20 non-medicated adults with PI (12 women) and 20 age- and sex-matched healthy sleeper comparison subjects. PI subjects had significantly lower GABA/Cr in the OC (p=0.0005) and ACC (p=0.03) compared with healthy sleepers. There was no significant difference in thalamic GABA/Cr between groups. After correction for multiple comparisons, GABA/Cr did not correlate significantly with insomnia severity measures among PI subjects. This study is the first to demonstrate regional reductions of GABA in PI in the OC and ACC. Reductions in GABA in similar brain regions in MDD using 1H-MRS suggest a common reduction in cortical GABA among PI and mood disorders. PMID:22318195
Tong, Yunxia; Chen, Qiang; Nichols, Thomas E.; Rasetti, Roberta; Callicott, Joseph H.; Berman, Karen F.; Weinberger, Daniel R.; Mattay, Venkata S.
2016-01-01
A data-driven hypothesis-free genome-wide association (GWA) approach in imaging genetics studies allows screening the entire genome to discover novel genes that modulate brain structure, chemistry, and function. However, a whole-brain voxel-wise analysis approach in such genome-wide based imaging genetic studies can be computationally intensive and is also likely to have low statistical power, since a stringent multiple comparisons correction is needed for searching over the entire genome and brain. In imaging genetics with functional magnetic resonance imaging (fMRI) phenotypes, since many experimental paradigms activate focal regions that can be pre-specified based on a priori knowledge, reducing the voxel-wise search to single-value summary measures within a priori ROIs could prove efficient and promising. The goal of this investigation is to evaluate the sensitivity and reliability of different single-value ROI summary measures and provide guidance for future work. Four different fMRI databases were tested and comparisons across different groups (patients with schizophrenia and their siblings vs. normal control subjects; across genotype groups) were conducted. Our results show that four of these measures, particularly those that represent values from the top most-activated voxels within an ROI, are more powerful at reliably detecting group differences and generating greater effect sizes than the others. PMID:26974435
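One class of summary measure discussed here, the mean of the most-activated voxels inside an a priori ROI, is easy to express; a generic sketch under my own definitions, not necessarily the authors' exact implementation:

    import numpy as np

    def roi_top_summary(contrast_map, roi_mask, frac=0.10):
        """Mean of the top `frac` most-activated voxels inside the ROI mask."""
        vals = contrast_map[roi_mask.astype(bool)]
        n_top = max(1, int(round(frac * vals.size)))
        return np.sort(vals)[-n_top:].mean()

    # contrast_map: 3-D array of beta or t values; roi_mask: 3-D boolean array
    # summary = roi_top_summary(contrast_map, roi_mask, frac=0.10)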
Biometrics for electronic health records.
Flores Zuniga, Alejandro Enrique; Win, Khin Than; Susilo, Willy
2010-10-01
Securing electronic health records, in scenarios in which the provision of care services is shared among multiple actors, can become a complex and costly activity. Correct identification of patients and physicians, protection of privacy and confidentiality, assignment of access permissions for healthcare providers and resolution of conflicts arise as the main points of concern in the development of interconnected health information networks. Biometric technologies have been proposed as a possible technological solution for these issues due to their ability to provide a mechanism for unique verification of an individual's identity. This paper presents an analysis of the benefits as well as the disadvantages offered by biometric technology. A comparison between this technology and more traditional identification methods is used to determine the key benefits and flaws of the use of biometrics in health information systems. The comparison has been made considering the viability of the technologies for medical environments, global security needs, the consideration of a shared care environment and the costs involved in the implementation and maintenance of such technologies. This paper also discusses alternative uses for biometric technologies in health care environments. The outcome of this analysis is that even though biometric technologies offer several advantages over traditional methods of identification, they are still in the early stages of providing a suitable solution for a health care environment.
Extended Glauert tip correction to include vortex rollup effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maniaci, David; Schmitz, Sven
Wind turbine load predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. The results obtained serve as an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.
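For context, the standard tip-loss factor used inside Glauert's blade-element momentum correction is Prandtl's expression; a minimal sketch of that baseline (the paper's extended, rollup-aware correction is not reproduced here, and the example numbers are invented):

    import numpy as np

    def prandtl_tip_loss(r, R, B, phi):
        """Prandtl tip-loss factor F for local radius r, tip radius R,
        B blades and inflow angle phi (radians)."""
        f = (B / 2.0) * (R - r) / (r * np.sin(phi))
        return (2.0 / np.pi) * np.arccos(np.exp(-f))

    # Example: 3-bladed rotor, R = 50 m, evaluated at r = 48 m, phi = 7 degrees
    print(prandtl_tip_loss(48.0, 50.0, 3, np.radians(7.0)))

In a BEM code this factor multiplies the momentum-theory loading, which is why an improved F translates directly into improved tip-load predictions.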
NASA Astrophysics Data System (ADS)
Ganapathy, Vinay; Ramachandran, Ramesh
2017-10-01
The response of a quadrupolar nucleus (nuclear spin with I > 1/2) to an oscillating radio-frequency pulse/field is delicately dependent on the ratio of the quadrupolar coupling constant to the amplitude of the pulse in addition to its duration and oscillating frequency. Consequently, analytic description of the excitation process in the density operator formalism has remained less transparent within existing theoretical frameworks. As an alternative, the utility of the "concept of effective Floquet Hamiltonians" is explored in the present study to explicate the nuances of the excitation process in multilevel systems. Employing spin I = 3/2 as a case study, a unified theoretical framework for describing the excitation of multiple-quantum transitions in static isotropic and anisotropic solids is proposed within the framework of perturbation theory. The challenges resulting from the anisotropic nature of the quadrupolar interactions are addressed within the effective Hamiltonian framework. The possible role of the various interaction frames on the convergence of the perturbation corrections is discussed along with a proposal for a "hybrid method" for describing the excitation process in anisotropic solids. Employing suitable model systems, the validity of the proposed hybrid method is substantiated through a rigorous comparison between simulations emerging from exact numerical and analytic methods.
Measuring Thermodynamic Properties of Metals and Alloys With Knudsen Effusion Mass Spectrometry
NASA Technical Reports Server (NTRS)
Copland, Evan H.; Jacobson, Nathan S.
2010-01-01
This report reviews Knudsen effusion mass spectrometry (KEMS) as it relates to thermodynamic measurements of metals and alloys. First, general aspects are reviewed, with emphasis on the Knudsen-cell vapor source and molecular beam formation, and mass spectrometry issues germane to this type of instrument are discussed briefly. The relationship between the vapor pressure inside the effusion cell and the measured ion intensity is the key to KEMS and is derived in detail. Then common methods used to determine thermodynamic quantities with KEMS are discussed. Enthalpies of vaporization, the fundamental measurement, are determined from the variation of relative partial pressure with temperature using the second-law method or by calculating a free energy of formation and subtracting the entropy contribution using the third-law method. For single-cell KEMS instruments, measurements can be used to determine the partial Gibbs free energy if the sensitivity factor remains constant over multiple experiments. The ion-current ratio method and dimer-monomer method are also viable in some systems. For a multiple-cell KEMS instrument, activities are obtained by direct comparison with a suitable component reference state or a secondary standard. Internal checks for correct instrument operation and general procedural guidelines also are discussed. Finally, general comments are made about future directions in measuring alloy thermodynamics with KEMS.
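A sketch of the second-law step described above: in KEMS the partial pressure is proportional to the product of ion intensity and temperature, so the enthalpy of vaporization follows from the slope of ln(I·T) versus 1/T. The data values below are invented for illustration:

    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    # Hypothetical measurements: temperature (K) and ion intensity (arb. units)
    T = np.array([1400.0, 1450.0, 1500.0, 1550.0, 1600.0])
    I = np.array([1.2e3, 2.9e3, 6.5e3, 1.4e4, 2.8e4])

    # p is proportional to I*T, so ln(I*T) vs 1/T has slope -dH/R (second-law method)
    slope, intercept = np.polyfit(1.0 / T, np.log(I * T), 1)
    dH = -R * slope
    print(f"second-law enthalpy of vaporization ≈ {dH / 1000:.0f} kJ/mol")

The third-law route instead computes a free energy of formation at each temperature and subtracts the entropy contribution, which requires tabulated free-energy functions rather than a simple slope.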
Examination of association of genes in the serotonin system to autism.
Anderson, B M; Schnetz-Boutaud, N C; Bartlett, J; Wotawa, A M; Wright, H H; Abramson, R K; Cuccaro, M L; Gilbert, J R; Pericak-Vance, M A; Haines, J L
2009-07-01
Autism is characterized as one of the pervasive developmental disorders, a spectrum of often severe behavioral and cognitive disturbances of early development. The high heritability of autism has driven multiple efforts to identify genetic variation that increases autism susceptibility. Numerous studies have suggested that variation in peripheral and central metabolism of serotonin (5-hydroxytryptamine) may play a role in the pathophysiology of autism. We screened 403 autism families for 45 single nucleotide polymorphisms in ten serotonin pathway candidate genes. Although genome-wide linkage scans in autism have provided support for linkage to various loci located within the serotonin pathway, our study does not provide strong evidence for linkage to any specific gene within the pathway. The most significant association (p = 0.0002; p = 0.02 after correcting for multiple comparisons) was found at rs1150220 (HTR3A) located on chromosome 11 (approximately 113 Mb). To test specifically for multilocus effects, multifactor dimensionality reduction was employed, and a significant two-way interaction (p value = 0.01) was found between rs10830962, near MTNR1B (chromosome 11; 92,338,075 bp), and rs1007631, near SLC7A5 (chromosome 16; 86,413,596 bp). These data suggest that variation within genes on the serotonin pathway, particularly HTR3A, may have modest effects on autism risk.
Kazubke, Edda; Schüttpelz-Brauns, Katrin
2010-01-01
Background: Multiple choice questions (MCQs) are often used in exams of medical education and need careful quality management for example by the application of review committees. This study investigates whether groups communicating virtually by email are similar to face-to-face groups concerning their review process performance and whether a facilitator has positive effects. Methods: 16 small groups of students were examined, which had to evaluate and correct MCQs under four different conditions. In the second part of the investigation the changed questions were given to a new random sample for the judgement of the item quality. Results: There was no significant influence of the variables “form of review committee” and “facilitation”. However, face-to-face and virtual groups clearly differed in the required treatment times. The test condition “face to face without facilitation” was generally valued most positively concerning taking over responsibility, approach to work, sense of well-being, motivation and concentration on the task. Discussion: Face-to-face and virtual groups are equally effective in the review of MCQs but differ concerning their efficiency. The application of electronic review seems to be possible but is hardly recommendable because of the long process time and technical problems. PMID:21818213
Liu, Peiwei; Feng, Tingyong
2018-05-09
Procrastination is an almost universal affliction, which occurs across cultures and brings serious consequences in multiple fields, such as finance, health and education. Previous research has shown that procrastination can be influenced by future time perspective (FTP). However, little is known about the neural basis underlying the impact of FTP on procrastination. To address this question, we used voxel-based morphometry (VBM) based on brain structure. In line with previous findings, the behavioral result indicated that FTP inventory scores were significantly negatively correlated with procrastination inventory scores (r = -0.63, n = 160). The whole-brain VBM results showed that FTP scores were significantly negatively correlated with the grey matter (GM) volumes of the parahippocampal gyrus (paraPHC) and ventromedial prefrontal cortex (vmPFC) after correction for multiple comparisons. Furthermore, mediation analyses revealed that the effect of GM volumes of the paraPHC and vmPFC on procrastination was mediated by FTP. These results suggest that the paraPHC and vmPFC, critical brain regions for episodic future thinking, could be the neural basis responsible for the impact of FTP on procrastination. The present study extends our knowledge of procrastination, and provides a novel perspective from which to understand the relationship between FTP and procrastination.
Goradia, Dhruman D; Vogel, Sherry; Mohl, Brianne; Khatib, Dalal; Zajac-Benitez, Caroline; Rajan, Usha; Robin, Arthur; Rosenberg, David R; Stanley, Jeffrey A
2016-12-30
There is evidence of greater cognitive deficits in attention deficit hyperactivity disorder with a comorbid reading disability (ADHD/+RD) compared to ADHD alone (ADHD/-RD). Additionally, the striatum has been consistently implicated in ADHD. However, the extent of morphological alterations in the striatum in ADHD/+RD is poorly understood, which is the main focus of this study. Based on structural MRI images, the surface deformation of the caudate and putamen was assessed in 59 boys matched for age and IQ [19 ADHD/-RD, 15 ADHD/+RD and 25 typically developing controls (TDC)]. A vertex-based analysis with multiple comparison correction was conducted to compare ADHD/-RD and ADHD/+RD to TDC. Compared to TDC, ADHD/+RD showed multiple bilateral significant clusters of surface compression in the caudate. In contrast, ADHD/-RD showed fewer significant clusters of surface compression, and these were restricted to the left side. Regarding the putamen, only ADHD/-RD showed significant clusters of surface compression. The results demonstrate for the first time a greater extent of morphological alterations in the caudate in ADHD/+RD than in ADHD/-RD compared to TDC, which may suggest greater involvement of cortical areas projecting to the caudate, consistent with the greater neuropsychological impairments observed in ADHD/+RD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Two-dimensional correlation spectroscopy — Biannual survey 2007-2009
NASA Astrophysics Data System (ADS)
Noda, Isao
2010-06-01
The publication activities in the field of 2D correlation spectroscopy are surveyed with the emphasis on papers published during the last two years. Pertinent review articles and conference proceedings are discussed first, followed by the examination of noteworthy developments in the theory and applications of 2D correlation spectroscopy. Specific topics of interest include Pareto scaling, analysis of randomly sampled spectra, 2D analysis of data obtained under multiple perturbations, evolution of 2D spectra along additional variables, comparison and quantitative analysis of multiple 2D spectra, orthogonal sample design to eliminate interfering cross peaks, quadrature orthogonal signal correction and other data transformation techniques, data pretreatment methods, moving window analysis, extension of kernel and global phase angle analysis, covariance and correlation coefficient mapping, variant forms of sample-sample correlation, and different display methods. Various static and dynamic perturbation methods used in 2D correlation spectroscopy, e.g., temperature, composition, chemical reactions, H/D exchange, physical phenomena like sorption, diffusion and phase transitions, optical and biological processes, are reviewed. Analytical probes used in 2D correlation spectroscopy include IR, Raman, NIR, NMR, X-ray, mass spectrometry, chromatography, and others. Application areas of 2D correlation spectroscopy are diverse, encompassing synthetic and natural polymers, liquid crystals, proteins and peptides, biomaterials, pharmaceuticals, food and agricultural products, solutions, colloids, surfaces, and the like.
Gaissmaier, Wolfgang; Giese, Helge; Galesic, Mirta; Garcia-Retamero, Rocio; Kasper, Juergen; Kleiter, Ingo; Meuth, Sven G; Köpke, Sascha; Heesen, Christoph
2018-01-01
A shared decision-making approach is suggested for multiple sclerosis (MS) patients. To properly evaluate benefits and risks of different treatment options accordingly, MS patients require sufficient numeracy - the ability to understand quantitative information. It is unknown whether MS affects numeracy. Therefore, we investigated whether patients' numeracy was impaired compared to a probabilistic national sample. As part of the larger prospective, observational, multicenter study PERCEPT, we assessed numeracy for a clinical study sample of German MS patients (N=725) with a standard test and compared them to a German probabilistic sample (N=1001), controlling for age, sex, and education. Within patients, we assessed whether disease variables (disease duration, disability, annual relapse rate, cognitive impairment) predicted numeracy beyond these demographics. MS patients showed a comparable level of numeracy as the probabilistic national sample (68.9% vs. 68.5% correct answers, P=0.831). In both samples, numeracy was higher for men and the highly educated. Disease variables did not predict numeracy beyond demographics within patients, and predictability was generally low. This sample of MS patients understood quantitative information on the same level as the general population. There is no reason to withhold quantitative information from MS patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Chhapola, Viswas; Kanwal, Sandeep Kumar; Brar, Rekha
2015-05-01
To carry out a cross-sectional survey of laboratory research papers published after 2012 and available in common search engines (PubMed, Google Scholar), assessing the quality of statistical reporting of method comparison studies that used Bland-Altman (B-A) analysis. Fifty clinical studies were identified which had undertaken method comparison of laboratory analytes using B-A. The reporting of B-A was evaluated using a predesigned checklist with the following six items: (1) correct representation of the x-axis on the B-A plot, (2) representation and correct definition of the limits of agreement (LOA), (3) reporting of the confidence interval (CI) of the LOA, (4) comparison of the LOA with a priori defined clinical criteria, (5) evaluation of the pattern of the relationship between the difference (y-axis) and the average (x-axis) and (6) measures of repeatability. The x-axis and LOA were presented correctly in 94%, comparison with a priori clinical criteria in 74%, CI reporting in 6%, evaluation of pattern in 28% and repeatability assessment in 38% of studies. There is incomplete reporting of B-A in published clinical studies. Despite its simplicity, B-A appears not to be completely understood by researchers, reviewers and editors of journals. There appear to be differences in the reporting of B-A between laboratory medicine journals and other clinical journals. A uniform reporting of the B-A method will enhance the generalizability of results. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
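A minimal sketch of the Bland-Altman quantities on the checklist: the mean difference (bias), the 95% limits of agreement, and an approximate confidence interval for each limit (illustrative paired data; the CI uses the common large-sample approximation SE(LOA) ≈ sd·sqrt(3/n)):

    import numpy as np
    from scipy import stats

    # Hypothetical paired measurements from two methods
    m1 = np.array([5.1, 6.3, 7.8, 5.9, 6.7, 8.2, 7.1, 6.0])
    m2 = np.array([5.4, 6.1, 8.1, 6.2, 6.5, 8.6, 7.4, 6.3])

    diff = m1 - m2
    avg = (m1 + m2) / 2                 # x-axis of the B-A plot
    bias = diff.mean()
    sd = diff.std(ddof=1)
    n = diff.size

    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

    # Approximate 95% CI for each limit of agreement
    t = stats.t.ppf(0.975, n - 1)
    se_loa = sd * np.sqrt(3.0 / n)
    ci_lower = (loa[0] - t * se_loa, loa[0] + t * se_loa)
    ci_upper = (loa[1] - t * se_loa, loa[1] + t * se_loa)
    print(bias, loa, ci_lower, ci_upper)

Reporting both the limits and their confidence intervals, and comparing them against a priori clinical criteria, covers items (2)-(4) of the checklist above.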
Further dissociating the processes involved in recognition memory: an FMRI study.
Henson, Richard N A; Hornberger, Michael; Rugg, Michael D
2005-07-01
Based on an event-related potential study by Rugg et al. [Dissociation of the neural correlates of implicit and explicit memory. Nature, 392, 595-598, 1998], we attempted to isolate the hemodynamic correlates of recollection, familiarity, and implicit memory within a single verbal recognition memory task using event-related fMRI. Words were randomly cued for either deep or shallow processing, and then intermixed with new words for yes/no recognition. The number of studied words was such that, whereas most were recognized ("hits"), an appreciable number of shallow-studied words were not ("misses"). Comparison of deep hits versus shallow hits at test revealed activations in regions including the left inferior parietal gyrus. Comparison of shallow hits versus shallow misses revealed activations in regions including the bilateral intraparietal sulci, the left posterior middle frontal gyrus, and the left frontopolar cortex. Comparison of hits versus correct rejections revealed a relative deactivation in an anterior left medial-temporal region (most likely the perirhinal cortex). Comparison of shallow misses versus correct rejections did not reveal response decreases in any regions expected on the basis of previous imaging studies of priming. Given these and previous data, we associate the left inferior parietal activation with recollection, the left anterior medial-temporal deactivation with familiarity, and the intraparietal and prefrontal responses with target detection. The absence of differences between shallow misses and correct rejections means that the hemodynamic correlates of implicit memory remain unclear.
A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons
ERIC Educational Resources Information Center
Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.
2016-01-01
Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…
A Fiducial Approach to Extremes and Multiple Comparisons
ERIC Educational Resources Information Center
Wandler, Damian V.
2010-01-01
Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…
Reporting of analyses from randomized controlled trials with multiple arms: a systematic review.
Baron, Gabriel; Perrodeau, Elodie; Boutron, Isabelle; Ravaud, Philippe
2013-03-27
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms. The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form. In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned. Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
Multimodal Randomized Functional MR Imaging of the Effects of Methylene Blue in the Human Brain
Rodriguez, Pavel; Zhou, Wei; Barrett, Douglas W.; Altmeyer, Wilson; Gutierrez, Juan E.; Li, Jinqi; Lancaster, Jack L.; Gonzalez-Lima, Francisco
2016-01-01
Purpose To investigate the sustained-attention and memory-enhancing neural correlates of the oral administration of methylene blue in the healthy human brain. Materials and Methods The institutional review board approved this prospective, HIPAA-compliant, randomized, double-blinded, placebo-controlled clinical trial, and all patients provided informed consent. Twenty-six subjects (age range, 22–62 years) were enrolled. Functional magnetic resonance (MR) imaging was performed with a psychomotor vigilance task (sustained attention) and delayed match-to-sample tasks (short-term memory) before and 1 hour after administration of low-dose methylene blue or a placebo. Cerebrovascular reactivity effects were also measured with the carbon dioxide challenge. A 2 × 2 repeated-measures analysis of variance was performed with drug (methylene blue vs placebo) and time (before vs after administration of the drug) as factors to assess drug × time between-group interactions. Multiple comparison correction was applied, with cluster-corrected P < .05 indicating a significant difference. Results Administration of methylene blue increased response in the bilateral insular cortex during a psychomotor vigilance task (Z = 2.9–3.4, P = .01–.008) and functional MR imaging response during a short-term memory task involving the prefrontal, parietal, and occipital cortex (Z = 2.9–4.2, P = .03–.0003). Methylene blue was also associated with a 7% increase in correct responses during memory retrieval (P = .01). Conclusion Low-dose methylene blue can increase functional MR imaging activity during sustained attention and short-term memory tasks and enhance memory retrieval. © RSNA, 2016 Online supplemental material is available for this article. PMID:27351678
Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng
2014-05-01
Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray-theory and wave-equation simulation methods should mutually verify each other and hence be developed jointly; in practice, however, they have progressed independently in parallel. For this reason, in this paper we try an alternative way to mutually verify and test the computational accuracy and solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation methods (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each of the methods, at least for kinematic features, but also to thoroughly understand the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method are able to yield the same results even for complex anisotropic media (such as a fault model); the multistage irregular shortest-path method is capable of predicting kinematic features similar to those of the wave-equation simulation methods, so the two approaches can be used to mutually test each other for methodology accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshot, common source point gather seismic section and synthetic seismogram predicted by the wave-equation simulation method, which is a key issue for later seismic application.
NASA Astrophysics Data System (ADS)
Lavender, Samantha; Brito, Fabrice; Aas, Christina; Casu, Francesco; Ribeiro, Rita; Farres, Jordi
2014-05-01
Data challenges are becoming the new method to promote innovation within data-intensive applications, building or evolving user communities and potentially developing sustainable commercial services. These can utilise the vast amount of information (both in scope and volume) that is available online, and benefit from reduced processing costs. Data challenges are also closely related to the recent paradigm shift towards e-Science, also referred to as 'data-intensive science'. The E-CEO project aims to deliver a collaborative platform that, through Data Challenge Contests, will improve the adoption and outreach of new applications and methods to process Earth Observation (EO) data. Underneath, the backbone must be a common environment where the applications can be developed, deployed and executed. The results then need to be easily published in a common visualization platform for their effective validation, evaluation and transparent peer comparison. Contest #3 is based around the atmospheric correction (AC) of ocean colour data, with a particular focus on the use of auxiliary data files for processing Level 1 products (Top of Atmosphere, TOA, calibrated radiances/reflectances) to Level 2 products (Bottom of Atmosphere, BOA, calibrated radiances/reflectances and derived products). Scientific researchers commonly accept the auxiliary inputs that they have been provided with and/or use the climatological data that accompanies the processing software, often because it can be difficult to obtain multiple data sources and convert them into a format the software accepts. Therefore, it is proposed to compare various ocean colour AC approaches and in the process study the uncertainties associated with using different meteorological auxiliary products for the processing of Medium Resolution Imaging Spectrometer (MERIS) data, i.e. the sensitivity to different atmospheric correction input assumptions.
Comparing Error Correction Procedures for Children Diagnosed with Autism
ERIC Educational Resources Information Center
Townley-Cochran, Donna; Leaf, Justin B.; Leaf, Ronald; Taubman, Mitchell; McEachin, John
2017-01-01
The purpose of this study was to examine the effectiveness of two error correction (EC) procedures: modeling alone and the use of an error statement plus modeling. Utilizing an alternating treatments design nested into a multiple baseline design across participants, we sought to evaluate and compare the effects of these two EC procedures used to…
Correction for Guessing in the Framework of the 3PL Item Response Theory
ERIC Educational Resources Information Center
Chiu, Ting-Wei
2010-01-01
Guessing behavior is an important topic with regard to assessing proficiency on multiple choice tests, particularly for examinees at lower levels of proficiency, due to the greater potential for systematic error or bias that inflates observed test scores. Methods that incorporate a correction for guessing on high-stakes tests generally rely…
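For reference, the classical formula-scoring correction alluded to here (distinct from the 3PL IRT approach this work develops) subtracts a penalty for wrong answers; a small illustrative sketch:

    def corrected_score(n_right: int, n_wrong: int, n_options: int) -> float:
        """Classical correction for guessing: R - W / (k - 1),
        where k is the number of options per multiple-choice item."""
        return n_right - n_wrong / (n_options - 1)

    # Example: 32 right, 12 wrong (omitted items excluded), 4-option items
    print(corrected_score(32, 12, 4))   # 28.0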
ERIC Educational Resources Information Center
Anderson, Paul S.
Initial experiences with computer-assisted reconsiderative scoring are described. Reconsiderative scoring occurs when student responses are received and reviewed by the teacher before points for correctness are assigned. Manually scored completion-style questions are reconsiderative. A new method of machine assistance produces an item analysis on…
ERIC Educational Resources Information Center
McMorrow, Martin J.; Foxx, R. M.
1986-01-01
The use of operant procedures was extended to decrease immediate echolalia and increase appropriate responding to questions of a 21-year-old autistic man. Multiple baseline designs demonstrated that echolalia was rapidly replaced with correct stimulus-specific responses. A variety of generalized improvements were observed in verbal responses to…
40 CFR 65.158 - Performance test procedures for control devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... simultaneously from multiple loading arms, each run shall represent at least one complete tank truck or tank car... the combustion air or as a secondary fuel into a boiler or process heater with a design capacity less... corrected to 3 percent oxygen if a combustion device is the control device. (A) The emission rate correction...
Common genetic variants in the 9p21 region and their associations with multiple tumours.
Gu, F; Pfeiffer, R M; Bhattacharjee, S; Han, S S; Taylor, P R; Berndt, S; Yang, H; Sigurdson, A J; Toro, J; Mirabello, L; Greene, M H; Freedman, N D; Abnet, C C; Dawsey, S M; Hu, N; Qiao, Y-L; Ding, T; Brenner, A V; Garcia-Closas, M; Hayes, R; Brinton, L A; Lissowska, J; Wentzensen, N; Kratz, C; Moore, L E; Ziegler, R G; Chow, W-H; Savage, S A; Burdette, L; Yeager, M; Chanock, S J; Chatterjee, N; Tucker, M A; Goldstein, A M; Yang, X R
2013-04-02
The chromosome 9p21.3 region has been implicated in the pathogenesis of multiple cancers. We systematically examined up to 203 tagging SNPs of 22 genes on 9p21.3 (19.9-32.8 Mb) in eight case-control studies: thyroid cancer, endometrial cancer (EC), renal cell carcinoma, colorectal cancer (CRC), colorectal adenoma (CA), oesophageal squamous cell carcinoma (ESCC), gastric cardia adenocarcinoma and osteosarcoma (OS). We used logistic regression to perform single SNP analyses for each study separately, adjusting for study-specific covariates. We combined SNP results across studies by fixed-effect meta-analyses and a newly developed subset-based statistical approach (ASSET). Gene-based P-values were obtained by the minP method using the Adaptive Rank Truncated Product program. We adjusted for multiple comparisons by Bonferroni correction. Rs3731239 in cyclin-dependent kinase inhibitors 2A (CDKN2A) was significantly associated with ESCC (P=7 × 10(-6)). The CDKN2A-ESCC association was further supported by gene-based analyses (Pgene=0.0001). In the meta-analyses by ASSET, four SNPs (rs3731239 in CDKN2A, rs615552 and rs573687 in CDKN2B and rs564398 in CDKN2BAS) showed significant associations with ESCC and EC (P<2.46 × 10(-4)). One SNP in MTAP (methylthioadenosine phosphorylase) (rs7023329) that was previously associated with melanoma and nevi in multiple genome-wide association studies was associated with CRC, CA and OS by ASSET (P=0.007). Our data indicate that genetic variants in CDKN2A, and possibly nearby genes, may be associated with ESCC and several other tumours, further highlighting the importance of 9p21.3 genetic variants in carcinogenesis.
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
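For reference, the entropy jump across a normal shock in a perfect gas, which entropy-correction schemes of this kind must account for, has the standard closed form below (M₁ is the upstream normal Mach number, γ the ratio of specific heats). This is the textbook relation, not the specific flux-biased discretization used in the paper.

```latex
% Perfect-gas entropy jump across a normal shock (standard relation, shown for context).
\frac{\Delta s}{c_v}
  = \ln\!\left[\left(1 + \frac{2\gamma}{\gamma+1}\bigl(M_1^{2} - 1\bigr)\right)
    \left(\frac{(\gamma-1)M_1^{2} + 2}{(\gamma+1)M_1^{2}}\right)^{\!\gamma}\right]
```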
Regularities in eyewitness identification.
Clark, Steven E; Howell, Ryan T; Davey, Sherrie L
2008-06-01
What do eyewitness identification experiments typically show? We address this question through a meta-analysis of 94 comparisons between target-present and target-absent lineups. The analyses showed that: (a) correct identifications and correct-nonidentifications were uncorrelated, (b) suspect identifications were more diagnostic with respect to the suspect's guilt or innocence than any other response, (c) nonidentifications were diagnostic of the suspect's innocence, (d) the diagnosticity of foil identifications depended on lineup composition, and (e) don't know responses were nondiagnostic with respect to guilt or innocence. Results of diagnosticity analyses for simultaneous and sequential lineups varied for full-sample versus direct-comparison analyses. Diagnosticity patterns also varied as a function of lineup composition. Theoretical, forensic, and legal implications are discussed.
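To make the diagnosticity notion concrete, a small hypothetical sketch follows: the diagnosticity of a lineup response is commonly taken as the ratio of its rate in target-present lineups to its rate in target-absent lineups. The rates below are invented for illustration and are not meta-analytic estimates.

```python
# Hypothetical sketch of a diagnosticity ratio: how much more often a response occurs
# when the suspect is guilty (target-present) than when innocent (target-absent).
# The rates are invented for illustration only.
def diagnosticity(rate_target_present: float, rate_target_absent: float) -> float:
    return rate_target_present / rate_target_absent

print(diagnosticity(0.50, 0.10))  # suspect IDs: 5.0 -> strongly diagnostic of guilt
print(diagnosticity(0.15, 0.35))  # nonidentifications: ~0.43 -> diagnostic of innocence
```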
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose: To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution and to explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Methods: 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley and lens thickness shifts from the nominal data. Results: Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions: The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations. PMID:22466105
Distortion correction of OCT images of the crystalline lens: gradient index approach.
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-05-01
To propose a method to correct optical coherence tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution and to explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Two-dimensional images of nine human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared with the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley, and lens thickness shifts from the nominal data. Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface in terms of root mean square and peak values, with errors <6 and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in two dimensions, it is expected that three-dimensional imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations.
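For readers unfamiliar with the baseline corrections compared against above, a rough hypothetical sketch of the "simple division" idea is given below: OCT reports optical path length, so dividing by an assumed homogeneous refractive index recovers an approximate geometric depth. The index value is illustrative, and the paper's iterative GRIN-based ray tracing is not reproduced here.

```python
# Hypothetical sketch of the "simple division" baseline: OCT measures optical path
# length (OPL), so an approximate geometric thickness is OPL divided by an assumed
# homogeneous (equivalent) refractive index. The index value is illustrative only.
def simple_division_depth(optical_path_mm: float, n_equivalent: float = 1.42) -> float:
    return optical_path_mm / n_equivalent

print(simple_division_depth(5.7))  # e.g., ~4.0 mm geometric thickness for a 5.7 mm OPL
```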
Prehn, Kristin; Lesemann, Anne; Krey, Georgia; Witte, A Veronica; Köbe, Theresa; Grittner, Ulrike; Flöel, Agnes
2017-08-23
Cardiovascular fitness is thought to exert beneficial effects on brain function and might delay the onset of cognitive decline. Empirical evidence of exercise-induced cognitive enhancement, however, has not been conclusive, possibly due to short intervention times in clinical trials. Resting-state functional connectivity (RSFC) has been proposed as an early indicator for intervention-induced changes. Here, we conducted a study in which healthy older overweight subjects either took part in a moderate aerobic exercise program over 6 months (AE group; n=11) or in a control condition of non-aerobic stretching and toning (NAE group; n=18). While cognitive and gray matter volume changes were rather small (i.e., appeared only in certain sub-scores without Bonferroni correction for multiple comparisons or using small volume correction), we found significantly increased RSFC after training between the dorsolateral prefrontal cortex and superior parietal gyrus/precuneus in the AE compared to the NAE group. This intervention study demonstrates an exercise-induced modulation of RSFC between key structures of the executive control and default mode networks, which might mediate an interaction between task-positive and task-negative brain activation required for task switching. Results further emphasize the value of RSFC as a sensitive biomarker for detecting early intervention-related cognitive improvements in clinical trials. Copyright © 2017 Elsevier Inc. All rights reserved.
Berman, Steven M.; London, Edythe D.; Morgan, Melinda; Rapkin, Andrea J.
2012-01-01
OBJECTIVE: Premenstrual dysphoric disorder (PMDD) is characterized by severe, negative mood symptoms during the luteal phase of each menstrual cycle. We recently reported that women with PMDD show a greater increase in relative glucose metabolism in the posterior cerebellum from the follicular to the luteal phase, as compared with healthy women, and that the phase-related increase is proportional to PMDD symptom severity. We extended this work with a study of brain structure in PMDD. METHODS: High-resolution magnetic resonance imaging (MRI) scans were obtained from 12 women with PMDD and 13 healthy control subjects. Voxel-based morphometry was used to assess group differences in cerebral grey-matter volume (GMV), using a statistical criterion of p<.05, correcting for multiple comparisons in the whole-brain volume. RESULTS: PMDD subjects had greater GMV than controls in the posterior cerebellum but not in any other brain area. Age was negatively correlated with GMV within this region in healthy women, but not in women with PMDD. The group difference in GMV was significant for women over age 30 (p=.0002) but not younger participants (p>.1). CONCLUSIONS: PMDD appears to be associated with reduced age-related loss in posterior cerebellar GMV. Although the mechanism underlying this finding is unclear, cumulative effects of symptom-related cerebellar activity may be involved. PMID:22868063
Further Evidence of the Association of the Diacylglycerol Kinase Kappa (DGKK) Gene With Hypospadias.
Hozyasz, Kamil Konrad; Mostowska, Adrianna; Kowal, Andrzej; Mydlak, Dariusz; Tsibulski, Alexander; Jagodzinski, Pawel P
2018-02-18
Hypospadias is a common developmental anomaly of the male external genitalia. In previous studies conducted in West European, Californian, and Han Chinese populations, a relationship between polymorphic variants of the diacylglycerol kinase kappa (DGKK) gene and hypospadias has been reported. The aim was to study the possible associations between polymorphic variants of the DGKK gene and hypospadias using an independent sample of the Polish population. Ten single nucleotide polymorphisms in DGKK, which were reported to have an impact on the risk of hypospadias in other populations, were genotyped using high-resolution melting curve analysis in a group of 166 boys with isolated anterior (66%) and middle (34%) forms of hypospadias and 285 properly matched controls without congenital anomalies. Two DGKK variants, rs11091748 and rs12171755, were associated with increased risk of hypospadias in the Polish population. These results were statistically significant even after applying the Bonferroni correction for multiple comparisons (P < .005). All the tested nucleotide variants were involved in haplotype combinations associated with hypospadias. The global p-values for haplotypes comprising rs4143304-rs11091748, rs11091748-rs17328236, rs1934179-rs4554617, rs1934183-rs1934179-rs4554617 and rs12171755-rs1934183-rs1934179-rs4554617 were statistically significant, even after the permutation test correction. Our study provides strong evidence of an association between DGKK nucleotide variants, haplotypes, and hypospadias susceptibility.
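The permutation correction mentioned for the haplotype analyses can be illustrated with a generic, hypothetical case-control label-permutation test. The simulated genotypes, the 166/285 sample split, and the simple test statistic below are for illustration only and are much simpler than the haplotype-based procedure actually used.

```python
# A minimal, hypothetical label-permutation test for a single variant, illustrating the
# general idea behind a permutation-based correction. Data are simulated; the statistic
# (difference in mean risk-allele counts) is a simplification for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def assoc_stat(genotypes: np.ndarray, is_case: np.ndarray) -> float:
    return abs(genotypes[is_case].mean() - genotypes[~is_case].mean())

genotypes = rng.integers(0, 3, size=451)   # 0/1/2 risk-allele counts, 166 cases + 285 controls
is_case = np.zeros(451, dtype=bool)
is_case[:166] = True

observed = assoc_stat(genotypes, is_case)
perm_stats = np.array([assoc_stat(genotypes, rng.permutation(is_case)) for _ in range(10_000)])
p_perm = (1 + np.sum(perm_stats >= observed)) / (10_000 + 1)
print(f"permutation p-value: {p_perm:.4f}")
```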
Neutron elastic and inelastic cross section measurements for 28Si
NASA Astrophysics Data System (ADS)
Derdeyn, E. C.; Lyons, E. M.; Morin, T.; Hicks, S. F.; Vanhoy, J. R.; Peters, E. E.; Ramirez, A. P. D.; McEllistrem, M. T.; Mukhopadhyay, S.; Yates, S. W.
2017-09-01
Neutron elastic and inelastic cross sections are critical for the design and implementation of nuclear reactors and reactor equipment. Silicon, an element used abundantly in fuel pellets as well as building materials, has few experimental cross-section measurements in the fast neutron region to support current theoretical evaluations, and would thus benefit from additional data. Measurements of neutron elastic and inelastic differential scattering cross sections for 28Si were performed at the University of Kentucky Accelerator Laboratory for incident neutron energies of 6.1 MeV and 7.0 MeV. Neutrons were produced by accelerated deuterons incident on a deuterium gas cell. These nearly mono-energetic neutrons were then scattered off a natural Si sample and detected using liquid deuterated benzene scintillation detectors. The scattered neutron energy was deduced using time-of-flight techniques in tandem with kinematic calculations to obtain an angular distribution. The relative detector efficiency was experimentally determined over a neutron energy range from approximately 0.5 to 7.75 MeV prior to the experiment. Yields were corrected for multiple scattering and neutron attenuation in the sample using the forced-collision Monte Carlo correction code MULCAT. The resulting cross sections will be presented along with comparisons to various data evaluations. Research is supported by USDOE-NNSA-SSAP: NA0002931, NSF: PHY-1606890, and the Donald A. Cowan Physics Institute at the University of Dallas.
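For context, the time-of-flight energy determination referred to above rests on the standard nonrelativistic relation between flight path and flight time (symbols are generic, not the laboratory's specific geometry; small relativistic corrections may also be applied in practice).

```latex
% Nonrelativistic neutron energy from time of flight (standard relation, generic symbols):
% L is the flight-path length, t the measured flight time, m_n the neutron mass.
E_n = \frac{1}{2}\, m_n \left(\frac{L}{t}\right)^{2}
```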
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mantry, Sonny; Petriello, Frank
We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h >> p_T >> Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.
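Schematically (and only schematically; the precise convolution structure and arguments are defined in the paper, not here), the factorization described above combines a hard function with the two impact-parameter beam functions and the inverse soft function:

```latex
% Schematic structure only: H is the hard function at the scale m_h, B_n and B_{\bar n}
% the two collinear impact-parameter beam functions, S^{-1} the inverse soft function;
% \otimes denotes the transverse convolution, whose exact form is given in the paper.
\frac{d\sigma}{dp_T^{2}\,dY} \;\sim\; H(m_h,\mu)\,\bigl[\,B_n \otimes B_{\bar n} \otimes S^{-1}\,\bigr](p_T,Y;\mu)
```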
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Evans, K. D.; DiGirolamo, P.; Demoz, B. B.; Turner, D.; Comstock, J.; Ismail, S.; Ferrare, R. A.; Browell, E. V.; Goldsmith, J. E. M.;
2002-01-01
The NASA/GSFC Scanning Raman Lidar (SRL) was deployed to the Southern Great Plains CART site from September to December 2000 and participated in two field campaigns devoted to comparisons of various water vapor measurement technologies and calibrations. These campaigns were the Water Vapor Intensive Operations Period 2000 (WVIOP2000) and the ARM FIRE Water Vapor Experiment (AFWEX). WVIOP2000 was devoted to validating water vapor measurements in the lower atmosphere, while AFWEX had similar goals but for measurements in the upper troposphere. The SRL was significantly upgraded both optically and electronically prior to these field campaigns. These upgrades enabled the SRL to demonstrate the highest-resolution lidar measurements of water vapor ever acquired during the nighttime and the highest S/N Raman lidar measurements of water vapor in the daytime: more than a factor of 2 increase in S/N versus the DOE CARL Raman lidar. Examples of these new measurement capabilities, along with comparisons of SRL and CARL, LASE, MPI-DIAL, in-situ sensors, radiosondes, and others, will be presented. The profile comparisons of the SRL and CARL have revealed what appears to be an overlap correction or count-rate correction problem in CARL. This may contribute to an overall dry bias of approximately 4% in the precipitable water calibration of CARL with respect to the MWR. Preliminary analysis indicates that the application of a temperature-dependent correction to the narrowband Raman lidar measurements of water vapor improves the lidar/Vaisala radiosonde comparisons of upper tropospheric water vapor. Other results, including the comparison of the first-ever simultaneous measurements from four water vapor lidar systems, a bore-wave event captured at high resolution by the SRL, and cirrus cloud optical depth studies using the SRL and CARL, will be presented at the meeting.
NASA Technical Reports Server (NTRS)
Walker, Eric L.
2005-01-01
Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.
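The validation logic described above, that independently corrected cases should agree to within the uncertainty of the correction process, can be sketched generically as below; the function, coverage factor, and example numbers are hypothetical and are not the TWICS implementation.

```python
# Generic, hypothetical sketch of the consistency check described above: two independently
# corrected results should agree within the combined (root-sum-square) correction uncertainty.
# The coverage factor k and the example drag-coefficient values are illustrative only.
import math

def agree_within_uncertainty(corrected_a: float, corrected_b: float,
                             u_a: float, u_b: float, k: float = 2.0) -> bool:
    return abs(corrected_a - corrected_b) <= k * math.hypot(u_a, u_b)

# One drag "count" is 0.0001 in drag coefficient; a two-count spread with one-count
# uncertainties on each result is judged consistent here.
print(agree_within_uncertainty(0.0251, 0.0253, 0.0001, 0.0001))  # True
```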
Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods
ERIC Educational Resources Information Center
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2016-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
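For orientation, the two classical score estimators named above have standard textbook forms (y: observed indicators with mean μ and model-implied covariance Σ = ΛΦΛᵀ + Ψ; Λ: loadings; Φ: factor covariance; Ψ: unique variances). How the Skrondal-Laake and Croon procedures then adjust the regression built on such scores is not reproduced here.

```latex
% Standard textbook forms of the regression and Bartlett factor-score estimators.
\hat{F}_{\text{regression}} = \Phi\,\Lambda^{\top}\Sigma^{-1}(y-\mu),
\qquad
\hat{F}_{\text{Bartlett}} = \bigl(\Lambda^{\top}\Psi^{-1}\Lambda\bigr)^{-1}\Lambda^{\top}\Psi^{-1}(y-\mu)
```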