Sample records for wise error rate

  1. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on CRAN, making them directly available to end users. The complexities of the formulas are presented to give insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
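
    The q-generalized family-wise error rate (probability of at least q false rejections) and the r-power (probability of rejecting at least r false nulls) defined above can be illustrated with a simple simulation. The Python sketch below estimates both by Monte Carlo for independent one-sided z-tests under a Bonferroni threshold; it is a generic illustration only, not the paper's closed-form formulas or its rPowerSampleSize implementation, and the number of endpoints, effect size, q and r are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

m, q, r = 10, 2, 3          # total tests, gFWER order q, r-power order r (assumed values)
n_true_null = 5             # first 5 hypotheses are true nulls, last 5 are false
effect = 3.5                # common mean shift under the alternative (assumed)
alpha = 0.05
z_crit = norm.ppf(1 - alpha / m)   # one-sided Bonferroni critical value

n_rep = 100_000
means = np.concatenate([np.zeros(n_true_null), np.full(m - n_true_null, effect)])
Z = rng.normal(loc=means, size=(n_rep, m))          # test statistics for each replicate
reject = Z > z_crit

false_rej = reject[:, :n_true_null].sum(axis=1)     # rejections among true nulls
true_rej = reject[:, n_true_null:].sum(axis=1)      # rejections among false nulls

q_gfwer = np.mean(false_rej >= q)                   # P(at least q false rejections)
r_power = np.mean(true_rej >= r)                    # P(at least r correct rejections)
print(f"estimated {q}-gFWER = {q_gfwer:.4f}, {r}-power = {r_power:.4f}")
```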

  2. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
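
    The core argument above, that letting the per-comparison Type I error rate shrink toward zero also drives the global (family-wise) rate toward zero, can be made concrete with the standard independence approximation FWER = 1 - (1 - alpha)^m. The short Python calculation below is purely illustrative and uses an assumed number of comparisons.

```python
# Family-wise Type I error under independence: FWER = 1 - (1 - alpha)^m.
# As the per-comparison rate alpha shrinks, the global rate shrinks with it,
# even for the very large m typical of voxel-wise imaging analyses.
m = 100_000                      # number of voxel-wise comparisons (illustrative)
for alpha in (0.05, 1e-4, 1e-6, 1e-8):
    fwer = 1 - (1 - alpha) ** m
    print(f"alpha = {alpha:.0e}  ->  FWER ~ {fwer:.4g}")
```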

  3. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    The naming game simulates the process of naming an object by a population of agents organized in a communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but markedly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
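
    As a rough sketch of the kind of dynamics described above (not the authors' NGLE model or their error-prevention strategy), the Python snippet below runs a minimal naming game on a random graph in which the hearer mis-learns the transmitted word with a fixed error rate; the network size, error rate and invention rule are all assumptions.

```python
import random
import networkx as nx

random.seed(1)
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)   # random-graph topology (assumed parameters)
nodes = list(G.nodes)
lexicon = {v: set() for v in nodes}               # each agent's word memory
error_rate = 0.05                                 # probability that the hearer mis-learns the word
next_word = 0

for step in range(200_000):
    speaker = random.choice(nodes)
    neighbors = list(G[speaker])
    if not neighbors:
        continue
    hearer = random.choice(neighbors)
    if not lexicon[speaker]:                      # speaker invents a word if its memory is empty
        next_word += 1
        lexicon[speaker].add(next_word)
    word = random.choice(tuple(lexicon[speaker]))
    if random.random() < error_rate:              # learning error: a distorted word is transmitted
        next_word += 1
        word = next_word
    if word in lexicon[hearer]:                   # success: both agents keep only this word
        lexicon[speaker] = {word}
        lexicon[hearer] = {word}
    else:                                         # failure: the hearer memorises the (possibly distorted) word
        lexicon[hearer].add(word)

all_words = set().union(*lexicon.values())
print("distinct words still held by the population:", len(all_words))
```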

  4. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but markedly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.

  5. 40 CFR 258.53 - Ground-water sampling and analysis requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... include consistent sampling and analysis procedures that are designed to ensure monitoring results that... testing period. If a multiple comparisons procedure is used, the Type I experiment wise error rate for...

  6. Statistical testing and power analysis for brain-wide association study.

    PubMed

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons and can thus efficiently tackle large datasets with high resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. 40 CFR 257.23 - Ground-water sampling and analysis requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and analysis procedures that are designed to ensure monitoring results that provide an accurate... procedure is used, the Type I experiment wise error rate for each testing period shall be no less than 0.05...

  8. Testing for clustering at many ranges inflates family-wise error rate (FWE).

    PubMed

    Loop, Matthew Shane; McClure, Leslie A

    2015-01-15

    Testing for clustering at multiple ranges within a single dataset is a common practice in spatial epidemiology. It is not documented whether this approach has an impact on the type 1 error rate. We estimated the family-wise error rate (FWE) for the difference in Ripley's K functions test, when testing at an increasing number of ranges at an alpha-level of 0.05. Case and control locations were generated from a Cox process on a square area the size of the continental US (≈3,000,000 mi²). Two thousand Monte Carlo replicates were used to estimate the FWE with 95% confidence intervals when testing for clustering at one range, as well as 10, 50, and 100 equidistant ranges. The estimated FWE and 95% confidence intervals when testing 10, 50, and 100 ranges were 0.22 (0.20 - 0.24), 0.34 (0.31 - 0.36), and 0.36 (0.34 - 0.38), respectively. Testing for clustering at multiple ranges within a single dataset inflated the FWE above the nominal level of 0.05. Investigators should construct simultaneous critical envelopes (available in the spatstat package in R), or use a test statistic that integrates the test statistics from each range, as suggested by the creators of the difference in Ripley's K functions test.
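
    The FWE estimates quoted above (e.g., 0.22 from 2,000 Monte Carlo replicates) are binomial proportions, so their 95% confidence intervals can be reproduced directly. A minimal sketch using statsmodels, assuming counts reconstructed from the reported proportions:

```python
from statsmodels.stats.proportion import proportion_confint

n_rep = 2000
# Approximate replicate counts implied by the reported FWE estimates (assumed reconstruction).
for n_ranges, fwe_hat in [(10, 0.22), (50, 0.34), (100, 0.36)]:
    count = round(fwe_hat * n_rep)
    ci_low, ci_high = proportion_confint(count, n_rep, alpha=0.05, method="normal")
    print(f"{n_ranges:3d} ranges: FWE = {fwe_hat:.2f}, 95% CI ({ci_low:.2f} - {ci_high:.2f})")
```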

  9. Consistency-based rectification of nonrigid registrations

    PubMed Central

    Gass, Tobias; Székely, Gábor; Goksel, Orcun

    2015-01-01

    Abstract. We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083

  10. Reduced Integrity of Right Lateralized White Matter in Patients with Primary Insomnia: A Diffusion-Tensor Imaging Study.

    PubMed

    Li, Shumei; Tian, Junzhang; Bauer, Andreas; Huang, Ruiwang; Wen, Hua; Li, Meng; Wang, Tianyue; Xia, Likun; Jiang, Guihua

    2016-08-01

    Purpose To analyze the integrity of white matter (WM) tracts in primary insomnia patients and provide better characterization of abnormal WM integrity and its relationship with disease duration and clinical features of primary insomnia. Materials and Methods This prospective study was approved by the ethics committee of the Guangdong No. 2 Provincial People's Hospital. Tract-based spatial statistics were used to compare changes in diffusion parameters of WM tracts from 23 primary insomnia patients and 30 healthy control (HC) participants, and the accuracy of these changes in distinguishing insomnia patients from HC participants was evaluated. Voxel-wise statistics across subjects was performed by using a 5000-permutation set with family-wise error correction (family-wise error, P < .05). Multiple regressions were used to analyze the associations between the abnormal fractional anisotropy (FA) in WM with disease duration, Pittsburgh Sleep Quality Index, insomnia severity index, self-rating anxiety scale, and the self-rating depression scale in primary insomnia. Characteristics for abnormal WM were also investigated in tract-level analyses. Results Primary insomnia patients had lower FA values mainly in the right anterior limb of the internal capsule, right posterior limb of the internal capsule, right anterior corona radiata, right superior corona radiata, right superior longitudinal fasciculus, body of the corpus callosum, and right thalamus (P < .05, family-wise error correction). The receiver operating characteristic areas for the seven regions were acceptable (range, 0.60-0.74; 60%-74%). Multiple regression models showed abnormal FA values in the thalamus and body corpus callosum were associated with the disease duration, self-rating depression scale, and Pittsburgh Sleep Quality Index scores. Tract-level analysis suggested that the reduced FA values might be related to greater radial diffusivity. Conclusion This study showed that WM tracts related to regulation of sleep and wakefulness, and limbic cognitive and sensorimotor regions, are disrupted in the right brain in patients with primary insomnia. The reduced integrity of these WM tracts may be because of loss of myelination. (©) RSNA, 2016.

  11. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2017-10-01

    Observations made by the WISE space telescope and subsequent analysis by the NEOWISE project represent the largest corpus of asteroid data to date, describing the diameter, albedo, and other properties of the ~164,000 asteroids in the collection. I present a critical reanalysis of the WISE observational data, and NEOWISE results published in numerous papers and in the JPL Planetary Data System (PDS). This analysis reveals shortcomings and a lack of clarity, both in the original analysis and in the presentation of results. The procedures used to generate NEOWISE results fall short of established thermal modelling standards. Rather than using a uniform protocol, 10 modelling methods were applied to 12 combinations of WISE band data. Over half the NEOWISE results are based on a single band of data. Most NEOWISE curve fits are poor quality, frequently missing many or all the data points. About 30% of the single-band results miss all the data; 43% of the results derived from the most common multiple-band combinations miss all the data in at least one band. The NEOWISE data processing procedures rely on inconsistent assumptions, and introduce bias by systematically discarding much of the original data. I show that the true uncertainties of the WISE observational data are a factor of ~1.2 to 1.9 larger than previously described, and that the error estimates do not fit a normal distribution. These issues call into question the validity of the NEOWISE Monte-Carlo error analysis. Comparing published NEOWISE diameters to published estimates using radar, occultation, or spacecraft measurements (ROS) reveals 150 asteroids for which the NEOWISE diameters were copied exactly from the ROS source. My findings show that the accuracy of diameter estimates for NEOWISE results depends heavily on the choice of data bands and model. Systematic errors in the diameter estimates are much larger than previously described. Systematic errors for diameters in the PDS range from -3% to +27%. Random errors range from -14% to +19% when using all four WISE bands, and from -45% to +74% in cases using only the W2 band. The results presented here show that much work remains to be done towards understanding asteroid data from WISE/NEOWISE.

  12. Methods as Tools: A Response to O'Keefe.

    ERIC Educational Resources Information Center

    Hewes, Dean E.

    2003-01-01

    Tries to distinguish the key insights from some distortions by clarifying the goals of experiment-wise error control that D. O'Keefe correctly identifies as vague and open to misuse. Concludes that a better understanding of the goal of experiment-wise error correction erases many of these "absurdities," but the clarifications necessary…

  13. Correction of Microplate Data from High-Throughput Screening.

    PubMed

    Wang, Yuhong; Huang, Ruili

    2016-01-01

    High-throughput screening (HTS) makes it possible to collect cellular response data from a large number of cell lines and small molecules in a timely and cost-effective manner. The errors and noises in the microplate-formatted data from HTS have unique characteristics, and they can be generally grouped into three categories: run-wise (temporal, multiple plates), plate-wise (background pattern, single plate), and well-wise (single well). In this chapter, we describe a systematic solution for identifying and correcting such errors and noises, mainly based on pattern recognition and digital signal processing technologies.
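
    As one illustration of the plate-wise (background pattern) category of error mentioned above, a common first step is to strip smooth row and column trends from each plate by median polish, similar in spirit to B-score normalization; this generic numpy sketch is not the authors' specific algorithm.

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove additive row/column background effects from a plate of raw readouts."""
    residual = plate.astype(float).copy()
    for _ in range(n_iter):
        residual -= np.median(residual, axis=1, keepdims=True)   # strip row-wise background
        residual -= np.median(residual, axis=0, keepdims=True)   # strip column-wise background
    return residual                                              # pattern-corrected residuals

rng = np.random.default_rng(0)
plate = rng.normal(100, 5, size=(16, 24))        # a 384-well plate (16 x 24), assumed layout
plate += np.linspace(0, 20, 24)                  # synthetic column-wise drift (edge effect)
corrected = median_polish(plate)
print("column-mean spread before:", np.ptp(plate.mean(axis=0)).round(2),
      "after:", np.ptp(corrected.mean(axis=0)).round(2))
```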

  14. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  15. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    Summary This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods for analyzing the time trends in tropospheric ozone in eastern US. PMID:25642138

  16. Inhibitory saccadic dysfunction is associated with cerebellar injury in multiple sclerosis.

    PubMed

    Kolbe, Scott C; Kilpatrick, Trevor J; Mitchell, Peter J; White, Owen; Egan, Gary F; Fielding, Joanne

    2014-05-01

    Cognitive dysfunction is common in patients with multiple sclerosis (MS). Saccadic eye movement paradigms such as antisaccades (AS) can sensitively interrogate cognitive function, in particular, the executive and attentional processes of response selection and inhibition. Although we have previously demonstrated significant deficits in the generation of AS in MS patients, the neuropathological changes underlying these deficits were not elucidated. In this study, 24 patients with relapsing-remitting MS underwent testing using an AS paradigm. Rank correlation and multiple regression analyses were subsequently used to determine whether AS errors in these patients were associated with: (i) neurological and radiological abnormalities, as measured by standard clinical techniques, (ii) cognitive dysfunction, and (iii) regionally specific cerebral white and gray-matter damage. Although AS error rates in MS patients did not correlate with clinical disability (using the Expanded Disability Status Score), T2 lesion load or brain parenchymal fraction, AS error rate did correlate with performance on the Paced Auditory Serial Addition Task and the Symbol Digit Modalities Test, neuropsychological tests commonly used in MS. Further, voxel-wise regression analyses revealed associations between AS errors and reduced fractional anisotropy throughout most of the cerebellum, and increased mean diffusivity in the cerebellar vermis. Region-wise regression analyses confirmed that AS errors also correlated with gray-matter atrophy in the cerebellum right VI subregion. These results support the use of the AS paradigm as a marker for cognitive dysfunction in MS and implicate structural and microstructural changes to the cerebellum as a contributing mechanism for AS deficits in these patients. Copyright © 2013 Wiley Periodicals, Inc.

  17. Step-wise refolding of recombinant proteins.

    PubMed

    Tsumoto, Kouhei; Arakawa, Tsutomu; Chen, Linda

    2010-04-01

    Protein refolding is still done on a trial-and-error basis. Here we describe step-wise dialysis refolding, in which the denaturant concentration is altered in a step-wise fashion. This technology controls the folding pathway by adjusting the concentrations of the denaturant and other solvent additives to induce sequential folding or disulfide formation.

  18. The interval testing procedure: A general framework for inference in functional data analysis.

    PubMed

    Pini, Alessia; Vantini, Simone

    2016-09-01

    We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family wise error rate, which is particularly suited for functional data. We show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamic features involved in cerebral aneurysm pathology. The ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.

  19. Statistical aspects of the TNK-S2B trial of tenecteplase versus alteplase in acute ischemic stroke: an efficient, dose-adaptive, seamless phase II/III design.

    PubMed

    Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke

    2011-08-01

    TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. (2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.

  20. Cluster size statistic and cluster mass statistic: two novel methods for identifying changes in functional connectivity between groups or conditions.

    PubMed

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
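
    A simplified illustration of the permutation logic behind such corrections (a generic one-dimensional sketch, not the authors' connectome-level CSS/CMS implementation) controls the family-wise error rate by comparing observed cluster sizes against the permutation distribution of the maximum cluster size:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_conn, thresh = 12, 300, 2.0           # subjects, connections, cluster-forming threshold
diff = rng.normal(size=(n_sub, n_conn))        # paired condition differences (illustrative data)
diff[:, 100:130] += 1.0                        # inject a contiguous block of true effects

def t_stat(d):
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

def cluster_sizes(stat, thresh):
    """Sizes of runs of consecutive suprathreshold values (1-D stand-in for clusters)."""
    sizes, run = [], 0
    for above in stat > thresh:
        if above:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

observed = cluster_sizes(t_stat(diff), thresh)

# Permutation distribution of the maximum cluster size (sign flips for a paired design).
n_perm = 2000
max_size = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
    max_size[i] = max(cluster_sizes(t_stat(diff * signs), thresh), default=0)

for s in observed:
    p_fwe = (1 + np.sum(max_size >= s)) / (n_perm + 1)   # FWE-corrected cluster p-value
    print(f"cluster of size {s:3d}: corrected p = {p_fwe:.3f}")
```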

  1. Cluster Size Statistic and Cluster Mass Statistic: Two Novel Methods for Identifying Changes in Functional Connectivity Between Groups or Conditions

    PubMed Central

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods – the cluster size statistic (CSS) and cluster mass statistic (CMS) – are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity. PMID:24906136

  2. POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.

    PubMed

    Peña, Edsel A; Habiger, Joshua D; Wu, Wensong

    2011-02-01

    Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional "large M, small n" data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
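
    To make the comparison concrete, the sketch below applies the Šidák and Bonferroni (FWER) and Benjamini-Hochberg (FDR) adjustments that the abstract uses as baselines to a toy set of p-values via statsmodels; the p-values are invented for illustration, and the sketch does not implement the power-weighted decision functions proposed in the paper.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Toy p-values: a few small ones (signal) mixed with larger uniform-like noise (illustrative only).
pvals = np.array([0.0002, 0.001, 0.004, 0.012, 0.03, 0.08, 0.21, 0.35, 0.58, 0.91])

for method, label in [("bonferroni", "Bonferroni (FWER)"),
                      ("sidak", "Sidak (FWER)"),
                      ("fdr_bh", "Benjamini-Hochberg (FDR)")]:
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{label:26s} rejections: {int(reject.sum())}, adjusted p: {np.round(p_adj, 3)}")
```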

  3. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2018-11-01

    Asteroid observations by the WISE space telescope and the analysis of those observations by the NEOWISE project have provided more information about the diameter, albedo, and other properties of approximately 164,000 asteroids, more than all other sources combined. The raw data set from this mission will likely be the largest and most important such data on asteroids available for many years. To put this trove of data to productive use, we must understand its strengths and weaknesses, and we need clear and reproducible methods for analyzing the data set. This study critically examines the WISE observational data and the NEOWISE results published in both the original papers and the NASA Planetary Data System (PDS). There seem to be multiple areas where the analysis might benefit from improvement or independent verification. The NEOWISE results were obtained by the application of 10 different modeling methods, many of which are not adequately explained or even defined, to 12 different combinations of WISE band data. More than half of NEOWISE results are based on a single band of data. The majority of curve fits to the data in the NEOWISE results are of poor quality, frequently missing most or all of the data points on which they are based. Complete misses occur for about 30% of single-band results, and among the results derived from the most common multiple-band combinations, about 43% miss all data points in at least one band. The NEOWISE data analysis relies on assumptions that are in many cases inconsistent with each other. A substantial fraction of WISE data was systematically excluded from the NEOWISE analysis. Building on methods developed by Hanuš et al. (2015), I show that error estimates for the WISE observational data were not well characterized, and all observations have true uncertainty at least a factor of 1.3-2.5 times larger than previously described, depending on the band. I also show that the error distribution is not well fit by a normal distribution. These findings are important because the Monte Carlo error-analysis method used by the NEOWISE project depends on both the observational errors and the normal distribution. An empirical comparison of published NEOWISE diameters to those in the literature that were estimated by using radar, occultation, or spacecraft (ROS) measurements shows that, for 129 results involving 105 asteroids, the NEOWISE diameters presented in tables of thermal-modeling results exactly match prior ROS results from the literature. While these are only a tiny fraction (0.06%) of the asteroids analyzed, they are important because they represent the only independent check on NEOWISE diameter accuracy. After removing the exact matches and adding additional ROS results, I find that the accuracy of diameter estimates for NEOWISE results depends strongly on the choice of data bands and on which of the 10 models was used. I show that systematic errors in the diameter estimates are much larger than previously described and range from -5% to +23%. In addition, random errors range from -15% to +19% when all four WISE bands were used, and from -39% to +57% in cases employing only the W2 band. The empirical results presented here show that much work remains to be done in analyzing data from the WISE/NEOWISE mission and interpreting it for asteroid science.

  4. Simple method to verify OPC data based on exposure condition

    NASA Astrophysics Data System (ADS)

    Moon, James; Ahn, Young-Bae; Oh, Sey-Young; Nam, Byung-Ho; Yim, Dong Gyu

    2006-03-01

    In a world where sub-100 nm lithography tools are everyday equipment for device makers, devices are shrinking at a rate no one ever imagined. With devices shrinking this quickly, the demands placed on Optical Proximity Correction (OPC) are greater than ever before, and more aggressive OPC tactics are required to meet them. Aggressive OPC tactics are a must for sub-100 nm lithography, but they leave greater room for OPC error and increase the complexity of the OPC data. Until now, Optical Rule Check (ORC) or Design Rule Check (DRC) has been used to verify this complex OPC data, but each of these methods has its pros and cons. ORC verification of OPC data is quite accurate process-wise, but inspection of a full-chip device requires considerable resources (computers, software) and patience (run time). DRC has no such disadvantage, but its process-wise accuracy is poor. In this study, we created a new method for OPC data verification that combines the best of both the ORC and DRC verification methods. The method inspects the biasing of the OPC data with respect to the illumination condition of the process involved. This new verification method was applied to the 80 nm technology ISOLATION and GATE layers of a 512M DRAM device and showed accuracy equivalent to ORC inspection with the run time of DRC verification.

  5. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, aimed especially at point sets that are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations and consequently point-sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. Significant reduction in alignment errors is achieved in the presence of outliers, using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.

  6. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interactions are known with adequate accuracy, but secondary gamma-ray production is known much less well. The purpose of this work is to study secondary gamma-ray production data from neutron-induced reactions in iron and lead using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. The calculations show that these nuclear data libraries contain different photon production data for neutron-induced reactions and agree poorly with an evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data and is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data with point-wise calculations using the ROSFOND library shows good agreement. The calculation-to-experiment (C/E) discrepancies for the neutron spectra lie within the experimental errors, whereas for the photon spectrum they lie outside the experimental errors. Calculations using group-wise and point-wise representations of the cross sections agree well for both photon and neutron spectra.

  7. Coordinate based random effect size meta-analysis of neuroimaging studies.

    PubMed

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by a random-effects meta-analysis of reported effects performed cluster-wise using standard statistical methods and taking account of censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating and even amplifying the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
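
    The cluster-wise random-effects meta-analysis at the heart of ClusterZ can be illustrated, in spirit, by the standard DerSimonian-Laird estimator applied to a handful of standardised effects; the numbers below are invented, and the sketch ignores the censoring and FCDR machinery described in the abstract.

```python
import numpy as np

# Reported standardised effects and their variances for one cluster (invented values).
y = np.array([0.48, 0.61, 0.30, 0.74, 0.52])
v = np.array([0.020, 0.035, 0.015, 0.050, 0.025])

# Fixed-effect weights and the DerSimonian-Laird between-study variance tau^2.
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled estimate and its standard error.
w_star = 1.0 / (v + tau2)
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))
print(f"tau^2 = {tau2:.4f}, pooled effect = {y_re:.3f} +/- {1.96 * se_re:.3f} (95% half-width)")
```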

  8. VizieR Online Data Catalog: Catalogue enriched with R CrB stars (Tisserand, 2012)

    NASA Astrophysics Data System (ADS)

    Tisserand, P.

    2012-01-01

    For each object, the equatorial and Galactic coordinates are given, as well as all four WISE (Wright et al., 2010AJ....140.1868W, Cat. II/307), three 2MASS (Skrutskie et al., 2006, Cat. VII/233), three DENIS (Epchtein et al., 1994Ap&SS.217....3E, Cat. B/denis) magnitudes, and their related 1-sigma errors. The five USNO-B1 magnitudes (Monet et al., 2003, Cat. I/284) are also listed, but not the individual measurement error since they were not delivered in the original catalogue. If one magnitude was not available, its value was replaced by the number -99. Also, if more than one epoch were available in the DENIS or USNO-B1 catalogues for one particular object, only the epoch related to the brightest magnitudes were kept. The last column of the catalogue gives the SIMBAD classification, as of July 2011, found using a 5 arcsec matching radius. An underscore character was given to the objects that had no classification in SIMBAD. WISE data acquisition and reduction are discussed in Wright et al. (2010AJ....140.1868W) and in the Explanatory Supplement to the WISE Preliminary Data Release Products. There are four WISE bands, with central wavelengths at 3.4, 4.6, 12, and 22um. (2 data files).

  9. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between local orbital and gradiometer reference frame. Error covariances are computed by Montecarlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies from external gravity information were removed thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  10. Efficiently characterizing the total error in quantum circuits

    NASA Astrophysics Data System (ADS)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    A promising technological advancement meant to enlarge our computational means is the quantum computer. Such a device would harvest the quantum complexity of the physical world in order to unfold concrete mathematical problems more efficiently. However, the errors emerging from the implementation of quantum operations are likewise quantum, and hence share a similar level of intricacy. Fortunately, randomized benchmarking protocols provide an efficient way to characterize the operational noise within quantum devices. The resulting figures of merit, like the fidelity and the unitarity, are typically attached to a set of circuit components. While important, this doesn't fulfill the main goal: determining if the error rate of the total circuit is small enough in order to trust its outcome. In this work, we fill the gap by providing an optimal bound on the total fidelity of a circuit in terms of component-wise figures of merit. Our bound smoothly interpolates between the classical regime, in which the error rate grows linearly in the circuit's length, and the quantum regime, which can naturally allow quadratic growth. Conversely, our analysis substantially improves the bounds on single circuit element fidelities obtained through techniques such as interleaved randomized benchmarking. This research was supported by the U.S. Army Research Office through Grant W911NF- 14-1-0103, CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
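
    The linear-versus-quadratic scaling mentioned above can be seen in a toy calculation (a generic illustration, not the bound derived in this work): for incoherent noise the error rates of the components add roughly linearly with circuit depth, whereas a coherent over-rotation adds in amplitude, so its error rate can grow quadratically with depth.

```python
import numpy as np

n = 50                 # circuit depth (number of gates), illustrative
eps = 1e-4             # per-gate error rate for the incoherent case

# Incoherent (e.g. depolarizing) noise: success probabilities multiply, so the
# total error rate grows roughly linearly in depth for small eps.
incoherent_total = 1 - (1 - eps) ** n
print(f"incoherent: n*eps = {n * eps:.2e}, total = {incoherent_total:.2e}")

# Coherent over-rotation by a small angle theta per gate: the angles add, so the
# error rate (~sin^2 of half the accumulated angle) grows roughly quadratically in n.
theta = 2 * np.arcsin(np.sqrt(eps))          # angle whose per-gate error rate equals eps
coherent_total = np.sin(n * theta / 2) ** 2
print(f"coherent:   per-gate = {eps:.2e}, total = {coherent_total:.2e} (~ n^2 * eps)")
```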

  11. When the Single Matters more than the Group (II): Addressing the Problem of High False Positive Rates in Single Case Voxel Based Morphometry Using Non-parametric Statistics.

    PubMed

    Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea

    2016-01-01

    In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that the family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM is much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which does not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set; and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.

  12. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms I: Revisiting Cluster-Based Inferences.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.

  13. Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics

    PubMed Central

    Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven

    2011-01-01

    Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
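
    A scaled-down sketch of this kind of analysis, per-biomarker Wilcoxon signed rank tests on paired summaries followed by a Holm step-down adjustment for strong family-wise error control, is shown below; the data are simulated, and the procedure is a generic univariate version rather than the multivariate signed rank statistics developed in the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_sub, n_marker = 22, 31                       # 22 subjects, 31 biomarkers (as in the abstract)
baseline = rng.lognormal(mean=1.0, sigma=0.4, size=(n_sub, n_marker))
induced = baseline * rng.lognormal(mean=0.0, sigma=0.3, size=(n_sub, n_marker))
induced[:, :6] *= 1.8                          # first six biomarkers truly increase (simulated)

# Paired Wilcoxon signed rank test for each biomarker.
pvals = np.array([wilcoxon(induced[:, j], baseline[:, j]).pvalue for j in range(n_marker)])

# Holm step-down: compare ordered p-values to alpha/m, alpha/(m-1), ... and stop at the
# first non-rejection (strong control of the family-wise error rate).
alpha, m = 0.05, n_marker
order = np.argsort(pvals)
reject = np.zeros(m, dtype=bool)
for rank, j in enumerate(order):
    if pvals[j] <= alpha / (m - rank):
        reject[j] = True
    else:
        break
print("biomarkers flagged after Holm adjustment:", np.where(reject)[0])
```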

  14. Step-wise transient method - Influence of heat source inertia

    NASA Astrophysics Data System (ADS)

    Malinarič, Svetozár; Dieška, Peter

    2016-07-01

    The step-wise transient (SWT) method is an experimental technique for measuring the thermal diffusivity and conductivity of materials. Theoretical models and the experimental apparatus are presented, and the influence of the heat source capacity is investigated by simulating the experiment. Specimens of low-density polyethylene (LDPE) were measured, yielding a thermal diffusivity of 0.165 mm²/s and a thermal conductivity of 0.351 W/mK with a coefficient of variation of less than 1.4 %. The heat source capacity caused a systematic error in the results of less than 1 %.

  15. Generation and tooth contact analysis of spiral bevel gears with predesigned parabolic functions of transmission errors

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Lee, Hong-Tao

    1989-01-01

    A new approach for determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb piece-wise linear functions of transmission errors that are caused by gear misalignment, thereby reducing gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing, bearing contact and determination of transmission errors for misaligned gears has been developed.
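
    The claim that a predesigned parabolic transmission-error function can absorb the nearly linear error functions caused by misalignment follows from elementary algebra: adding a linear term to a downward parabola only shifts its vertex. A short sketch of that step, with generic symbols rather than the paper's machine-tool settings:

```latex
% Adding a linear (misalignment-induced) error b\,\phi_1 to the predesigned
% parabolic transmission-error function -a\,\phi_1^2 of the gear rotation angle:
\begin{align*}
  \Delta\phi_2(\phi_1) \;=\; -a\,\phi_1^{2} + b\,\phi_1
    \;=\; -a\left(\phi_1 - \frac{b}{2a}\right)^{2} + \frac{b^{2}}{4a},
\end{align*}
% which is still a parabola with the same coefficient a, merely shifted: the linear
% component is absorbed without changing the shape of the resulting
% transmission-error function.
```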

  16. A Generic Deep-Learning-Based Approach for Automated Surface Inspection.

    PubMed

    Ren, Ruoxu; Hung, Terence; Tan, Kay Chen

    2018-03-01

    Automated surface inspection (ASI) is a challenging task in industry, as collecting a training dataset is usually costly and related methods are highly dataset-dependent. In this paper, a generic approach that requires only a small training dataset for ASI is proposed. First, this approach builds a classifier on the features of image patches, where the features are transferred from a pretrained deep learning network. Next, pixel-wise prediction is obtained by convolving the trained classifier over the input image. An experiment on three public datasets and one industrial dataset is carried out. The experiment involves two tasks: 1) image classification and 2) defect segmentation. The results of the proposed algorithm are compared against several of the best benchmarks in the literature. In the classification tasks, the proposed method improves accuracy by 0.66%-25.50%. In the segmentation tasks, the proposed method reduces error escape rates by 6.00%-19.00% for three defect types and improves accuracies by 2.29%-9.86% for all seven defect types. In addition, the proposed method achieves a 0.0% error escape rate in the segmentation task on the industrial data.

  17. Error-related processing following severe traumatic brain injury: An event-related functional magnetic resonance imaging (fMRI) study

    PubMed Central

    Sozda, Christopher N.; Larson, Michael J.; Kaufman, David A.S.; Schmalfuss, Ilona M.; Perlstein, William M.

    2011-01-01

    Continuous monitoring of one’s performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. PMID:21756946

  18. Error-related processing following severe traumatic brain injury: an event-related functional magnetic resonance imaging (fMRI) study.

    PubMed

    Sozda, Christopher N; Larson, Michael J; Kaufman, David A S; Schmalfuss, Ilona M; Perlstein, William M

    2011-10-01

    Continuous monitoring of one's performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Automatic Video Analysis for Obstructive Sleep Apnea Diagnosis.

    PubMed

    Abad, Jorge; Muñoz-Ferrer, Aida; Cervantes, Miguel Ángel; Esquinas, Cristina; Marin, Alicia; Martínez, Carlos; Morera, Josep; Ruiz, Juan

    2016-08-01

    We investigated the diagnostic accuracy of a noninvasive technology based on image processing (SleepWise) for the identification of obstructive sleep apnea (OSA) and its severity. This is an observational, prospective study to evaluate the degree of agreement between polysomnography (PSG) and SleepWise. We recruited 56 consecutive subjects with suspected OSA who were referred as outpatients to the Sleep Unit of the Hospital Universitari Germans Trias i Pujol (HUGTiP) from January 2013 to January 2014. All patients underwent laboratory PSG and image processing with SleepWise simultaneously the same night. Both PSG and SleepWise analyses were carried out independently and blindly. We analyzed 50 of the 56 patients recruited. OSA was diagnosed through PSG in a total of 44 patients (88%) with a median apnea-hypopnea index (AHI) of 25.35 (24.9). According to SleepWise, 45 patients (90%) met the criteria for a diagnosis of OSA, with a median AHI of 22.8 (22.03). An analysis of the ability of PSG and SleepWise to classify patients by severity on the basis of their AHI shows that the two diagnostic systems distribute the different groups similarly. According to PSG, 23 patients (46%) had a diagnosis of severe OSA, 11 patients (22%) moderate OSA, and 10 patients (20%) mild OSA. According to SleepWise, 20, 13, and 12 patients (40%, 26%, and 24%, respectively) had a diagnosis of severe, moderate, and mild OSA. For OSA diagnosis, SleepWise was found to have sensitivity of 100% and specificity of 83% in relation to PSG. The positive predictive value was 97% and the negative predictive value was 100%. The Bland-Altman plot comparing the mean AHI values obtained through PSG and SleepWise shows very good agreement between the two diagnostic techniques, with a bias of -3.85, a standard error of 12.18, and a confidence interval of -0.39 to -7.31. SleepWise was reasonably accurate for noninvasive and automatic diagnosis of OSA in outpatients. SleepWise determined the severity of OSA with high reliability. The current study, including simultaneous laboratory PSG and SleepWise image processing, is proposed as a reasonable validation standard. © 2016 Associated Professional Sleep Societies, LLC.
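
    The agreement figures quoted above come from a Bland-Altman analysis. A minimal sketch of how such statistics are computed from paired AHI measurements (synthetic values; the variable names are illustrative, not taken from the study):

```python
import numpy as np

# Synthetic paired AHI values from two methods (e.g., PSG and an automatic system).
rng = np.random.default_rng(2)
ahi_psg = rng.gamma(shape=2.0, scale=12.0, size=50)
ahi_auto = ahi_psg + rng.normal(loc=-3.0, scale=10.0, size=50)

diff = ahi_auto - ahi_psg
bias = diff.mean()                           # mean difference between methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
ci_bias = (bias - 1.96 * sd / np.sqrt(len(diff)),
           bias + 1.96 * sd / np.sqrt(len(diff)))  # approx. 95% CI of the bias

print(f"bias = {bias:.2f}, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
print(f"95% CI of the bias = ({ci_bias[0]:.2f}, {ci_bias[1]:.2f})")
```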

  20. WiseView: Visualizing motion and variability of faint WISE sources

    NASA Astrophysics Data System (ADS)

    Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume

    2018-06-01

    WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.

  1. Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.

    PubMed

    Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu

    2015-01-01

    This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piece-wise linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning the vehicle safety and mechanical constraints. An augmented neural network is developed to online compensate for the unknown nonlinearities, and a novel adaptive law is developed to estimate both NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability for the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model, which is capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
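
    A brief sketch of the underlying idea, under the assumption that each point comes with a 3x3 coordinate covariance matrix: the ellipsoid semi-axes are the square roots of the covariance eigenvalues, and a point-wise quality measure can be taken from the largest semi-axis (the exact measure used by the authors may differ).

```python
import numpy as np

def ellipsoid_semi_axes(cov: np.ndarray) -> np.ndarray:
    """Semi-axis lengths of the 1-sigma error ellipsoid for a 3x3 covariance matrix."""
    eigenvalues = np.linalg.eigvalsh(cov)       # ascending, all >= 0 for a valid covariance
    return np.sqrt(eigenvalues)

# Example: per-point covariances (random SPD matrices as stand-ins for a TLS error model).
rng = np.random.default_rng(3)
covs = [(lambda a: a @ a.T + 1e-6 * np.eye(3))(rng.normal(scale=2e-3, size=(3, 3)))
        for _ in range(1000)]

quality = np.array([ellipsoid_semi_axes(c).max() for c in covs])  # largest semi-axis per point
keep = quality < np.percentile(quality, 70)     # keep the 70% most precise points (arbitrary cut)
print(f"points kept for triangulation: {keep.sum()} of {len(covs)}")
```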

  3. On Test-Wiseness and Some Related Constructs

    ERIC Educational Resources Information Center

    Nilsson, Ingvar; Wedman, Ingemar

    1976-01-01

    States that, due to confusion of concepts and lack of systemization, "previous studies are often difficult to interpret and consequently...afford little possibility of formulating more precise statements about those errors the concepts represent...." A proposal for systematization is presented. (Author/RW)

  4. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning exploit properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to Lp quasinorms (0

  5. Building dynamic population graph for accurate correspondence detection.

    PubMed

    Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang

    2015-12-01

    In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct those inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness when compared with the state-of-the-art pair-wise correspondence detection methods as well as with a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. The importance of group-wise registration in tract based spatial statistics study of neurodegeneration: a simulation study in Alzheimer's disease.

    PubMed

    Keihaninejad, Shiva; Ryan, Natalie S; Malone, Ian B; Modat, Marc; Cash, David; Ridgway, Gerard R; Zhang, Hui; Fox, Nick C; Ourselin, Sebastien

    2012-01-01

    Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of a FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on: (a) specificity (b) sensitivity in a simulation study and (c) a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate changes of shape and WM integrity similar to what would be seen in AD patients, in order to provide a "ground truth" for evaluating the various methods of TBSS reference. Using a group-wise average atlas as the reference outperformed other references in the TBSS pipeline in all evaluations.

  7. The Importance of Group-Wise Registration in Tract Based Spatial Statistics Study of Neurodegeneration: A Simulation Study in Alzheimer's Disease

    PubMed Central

    Keihaninejad, Shiva; Ryan, Natalie S.; Malone, Ian B.; Modat, Marc; Cash, David; Ridgway, Gerard R.; Zhang, Hui; Fox, Nick C.; Ourselin, Sebastien

    2012-01-01

    Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of a FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on: (a) specificity (b) sensitivity in a simulation study and (c) a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate changes of shape and WM integrity similar to what would be seen in AD patients, in order to provide a “ground truth” for evaluating the various methods of TBSS reference. Using a group-wise average atlas as the reference outperformed other references in the TBSS pipeline in all evaluations. PMID:23139736

  8. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of the method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by the video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  9. Combination of GRACE monthly gravity field solutions from different processing strategies

    NASA Astrophysics Data System (ADS)

    Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian

    2018-02-01

    We combine the publicly available GRACE monthly gravity field time series to produce gravity fields with reduced systematic errors. We first compare the monthly gravity fields in the spatial domain in terms of signal and noise. Then, we combine the individual gravity fields with comparable signal content, but diverse noise characteristics. We test five different weighting schemes: equal weights, non-iterative coefficient-wise, order-wise, or field-wise weights, and iterative field-wise weights applying variance component estimation (VCE). The combined solutions are evaluated in terms of signal and noise in the spectral and spatial domains. Compared to the individual contributions, they in general show lower noise. In case the noise characteristics of the individual solutions differ significantly, the weighted means are less noisy, compared to the arithmetic mean: The non-seasonal variability over the oceans is reduced by up to 7.7% and the root mean square (RMS) of the residuals of mass change estimates within Antarctic drainage basins is reduced by 18.1% on average. The field-wise weighting schemes in general show better performance, compared to the order- or coefficient-wise weighting schemes. The combination of the full set of considered time series results in lower noise levels, compared to the combination of a subset consisting of the official GRACE Science Data System gravity fields only: The RMS of coefficient-wise anomalies is smaller by up to 22.4% and the non-seasonal variability over the oceans by 25.4%. This study was performed in the frame of the European Gravity Service for Improved Emergency Management (EGSIEM; http://www.egsiem.eu) project. The gravity fields provided by the EGSIEM scientific combination service (ftp://ftp.aiub.unibe.ch/EGSIEM/) are combined, based on the weights derived by VCE as described in this article.
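
    The weighting schemes compared here reduce, in the simplest case, to an inverse-variance weighted mean of the individual solutions. A minimal coefficient-wise sketch (the noise levels and coefficient arrays are synthetic placeholders; the actual EGSIEM combination estimates these variances iteratively via variance component estimation):

```python
import numpy as np

# Synthetic example: 5 monthly solutions of the same set of spherical-harmonic
# coefficients, each contaminated with a different noise level.
rng = np.random.default_rng(4)
truth = rng.normal(size=200)                        # "true" coefficients
noise_sigma = np.array([1.0, 2.0, 0.5, 1.5, 3.0])   # per-solution noise levels
solutions = truth + rng.normal(size=(5, 200)) * noise_sigma[:, None]

# Equal weights (arithmetic mean) vs. inverse-variance (field-wise) weights.
arithmetic = solutions.mean(axis=0)
weights = 1.0 / noise_sigma**2
weighted = (weights[:, None] * solutions).sum(axis=0) / weights.sum()

print("RMS error, arithmetic mean :", np.sqrt(np.mean((arithmetic - truth) ** 2)))
print("RMS error, weighted mean   :", np.sqrt(np.mean((weighted - truth) ** 2)))
```

    The weighted mean shows the lower RMS error whenever the noise characteristics of the contributing solutions differ appreciably, which mirrors the behaviour reported above.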

  10. Multi-arm group sequential designs with a simultaneous stopping rule.

    PubMed

    Urach, S; Posch, M

    2016-12-30

    Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as for any of the arms the null hypothesis of no treatment effect can be rejected, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
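
    The core issue behind group sequential boundaries is that repeatedly testing accumulating data at a fixed critical value inflates the family-wise error rate. A Monte Carlo sketch of this effect for three equally spaced looks (illustrative only; the constants below are classical textbook values, not the optimized boundaries derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n_sim, n_looks = 200_000, 3

# Standardized test statistics at three equally spaced looks under the null hypothesis.
increments = rng.standard_normal((n_sim, n_looks))
z = np.cumsum(increments, axis=1) / np.sqrt(np.arange(1, n_looks + 1))

def crossing_prob(boundary):
    """Probability of rejecting at any look (one-sided) for a given boundary."""
    return np.mean((z > boundary).any(axis=1))

# Naive repeated testing at z = 1.96 inflates the one-sided 0.025 error rate.
print("naive 1.96 at every look   :", round(crossing_prob(1.96), 4))
# Classical constant (Pocock-type) boundary for K = 3, one-sided alpha = 0.025.
print("Pocock-type constant 2.289 :", round(crossing_prob(2.289), 4))
# O'Brien-Fleming-type boundary: large early thresholds, close to 1.96 at the end.
obf = 2.004 * np.sqrt(n_looks / np.arange(1, n_looks + 1))
print("O'Brien-Fleming-type bounds:", np.round(obf, 3),
      "crossing prob:", round(np.mean((z > obf).any(axis=1)), 4))
```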

  11. VizieR Online Data Catalog: WISE/NEOWISE Mars-crossing asteroids (Ali-Lagoa+, 2017)

    NASA Astrophysics Data System (ADS)

    Ali-Lagoa, V.; Delbo, M.

    2017-07-01

    We fitted the near-Earth asteroid thermal model of Harris (1998, Icarus, 131, 29) to WISE/NEOWISE thermal infrared data (see, e.g., Mainzer et al. 2011ApJ...736..100M, and Masiero et al. 2014, Cat. J/ApJ/791/121). The table contains the best-fitting values of size and beaming parameter. We note that the beaming parameter is a strictly positive quantity, but a negative sign is given to indicate whenever we could not fit it and had to assume a default value. We also provide the visible geometric albedos computed from the diameter and the tabulated absolute magnitudes. Minimum relative errors of 10, 15, and 20 percent should be considered for size, beaming parameter and albedo in those cases for which the beaming parameter could be fitted. Otherwise, the minimum relative errors in size and albedo increase to 20 and 40 percent (see, e.g., Mainzer et al. 2011ApJ...736..100M). The asteroid absolute magnitudes and slope parameters retrieved from the Minor Planet Center (MPC) are included, as well as the number of observations used in each WISE band (nW2, nW3, nW4) and the corresponding average values of heliocentric and geocentric distances and phase angle of the observations. The ephemerides were retrieved from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).

  12. Improved estimation of parametric images of cerebral glucose metabolic rate from dynamic FDG-PET using volume-wise principal component analysis

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoqian; Tian, Jie; Chen, Zhe

    2010-03-01

    Parametric images can represent both the spatial distribution and the quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs), which propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, dynamic PET data are first pre-transformed to standardize the noise variance, as PCA is a data-driven technique and cannot itself separate signal from noise. Second, volume-wise PCA is applied to the PET data. The signal is mostly represented by the first few principal components (PCs), and the noise is left in the subsequent PCs. The noise-reduced data are then obtained from the first few PCs by applying the inverse PCA transform and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original dataset. Finally, the new dataset is used to generate parametric images using the LLS estimation method. Compared with other noise-removal methods, the proposed method achieves high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
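
    A compact sketch of the denoising step described here (truncate to the leading principal components, invert, then fit): the data shapes and the choice of two retained components are illustrative assumptions, not the study's settings.

```python
import numpy as np

# Synthetic dynamic data: n_voxels time-activity curves of length n_frames,
# generated from 2 underlying kinetic components plus noise.
rng = np.random.default_rng(6)
n_voxels, n_frames, n_components = 5000, 30, 2
basis = rng.normal(size=(n_components, n_frames))
mixing = rng.random(size=(n_voxels, n_components))
data = mixing @ basis + 0.5 * rng.normal(size=(n_voxels, n_frames))

# Volume-wise PCA via SVD on the mean-centered data.
mean_curve = data.mean(axis=0)
centered = data - mean_curve
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# Keep the first few PCs (signal), discard the rest (noise), and invert the PCA.
k = n_components
denoised = (u[:, :k] * s[:k]) @ vt[:k, :] + mean_curve

residual_before = np.std(data - mixing @ basis)
residual_after = np.std(denoised - mixing @ basis)
print(f"noise std before: {residual_before:.3f}, after truncated PCA: {residual_after:.3f}")
```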

  13. Numerical and experimental investigation of enhancement of heat transfer in dimpled rib heat exchanger tube

    NASA Astrophysics Data System (ADS)

    Kumar, Anil; Maithani, Rajesh; Suri, Amar Raj Singh

    2017-12-01

    In this study, numerical and experimental investigations have been carried out over a range of system and operating parameters in order to analyse the effect of dimpled ribs on heat and fluid flow behaviour in a heat exchanger tube. The tube has a stream-wise spacing (x/dd) range of 15-35, a span-wise spacing (y/dd) range of 15-35, a ratio of dimple depth to print diameter (e/dd) of 1.0, and a Reynolds number (Ren) range of 4000 to 28,000. Simulations were carried out to obtain the heat and fluid flow behaviour of smooth and rough tubes using the commercial CFD software ANSYS 16.0 (Fluent). The renormalization k-ε model was employed to assess the influence of the dimples on the turbulent flow and velocity field. Simulation results show an enhancement of 3.18 times in heat transfer and of 2.87 times in thermal-hydraulic performance at a stream-wise spacing (x/dd) of 15 and a span-wise spacing (y/dd) of 15, respectively. Comparison between numerical and experimental results showed good agreement, with the data falling within a ±10% error band.

  14. Adverse effects of metallic artifacts on voxel-wise analysis and tract-based spatial statistics in diffusion tensor imaging.

    PubMed

    Goto, Masami; Abe, Osamu; Hata, Junichi; Fukunaga, Issei; Shimoji, Keigo; Kunimatsu, Akira; Gomi, Tsutomu

    2017-02-01

    Background: Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI) technique that reflects the Brownian motion of water molecules constrained within brain tissue. Fractional anisotropy (FA) is one of the most commonly measured DTI parameters, and can be applied to quantitative analysis of white matter as tract-based spatial statistics (TBSS) and voxel-wise analysis. Purpose: To show an association between metallic implants and the results of statistical analysis (voxel-wise group comparison and TBSS) for fractional anisotropy (FA) mapping in DTI of healthy adults. Material and Methods: Sixteen healthy volunteers were scanned with 3-Tesla MRI. A magnetic keeper type of dental implant was used as the metallic implant. DTI was acquired three times in each participant: (i) without a magnetic keeper (FAnon1); (ii) with a magnetic keeper (FAimp); and (iii) without a magnetic keeper (FAnon2) as a reproducibility check of FAnon1. Group comparisons with paired t-tests were performed as FAnon1 vs. FAnon2 and as FAnon1 vs. FAimp. Results: Regions of significantly reduced and increased local FA values were revealed by voxel-wise group comparison analysis (a P value of less than 0.05, corrected with family-wise error), but not by TBSS. Conclusion: Metallic implants existing outside the field of view produce artifacts that affect the statistical analysis (voxel-wise group comparisons) for FA mapping. When statistical analysis for FA mapping is conducted by researchers, it is important to pay attention to any dental implants present in the mouths of the participants.

  15. Visualization and statistical comparisons of microbial communities using R packages on Phylochip data.

    PubMed

    Holmes, Susan; Alekseyenko, Alexander; Timme, Alden; Nelson, Tyrrell; Pasricha, Pankaj Jay; Spormann, Alfred

    2011-01-01

    This article explains the statistical and computational methodology used to analyze species abundances collected using the LBNL PhyloChip in a study of Irritable Bowel Syndrome (IBS) in rats. Some tools already available for the analysis of ordinary microarray data are useful in this type of statistical analysis. For instance, in correcting for multiple testing we use family-wise error rate control and step-down tests (available in the multtest package). Once the most significant species are chosen, we use the hypergeometric tests familiar from testing GO categories to test specific phyla and families. We provide examples of normalization, multivariate projections, batch effect detection and integration of phylogenetic covariation, as well as tree equalization and robustification methods.
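
    For readers unfamiliar with the two tools named here, a small Python analogue may help (the R multtest package offers the same family of procedures; the counts and p-values below are made up):

```python
import numpy as np
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

# Step-down FWER control (Holm) over a vector of per-species p-values.
pvals = np.array([0.0004, 0.003, 0.012, 0.04, 0.21, 0.47, 0.68])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print("Holm-adjusted p-values:", np.round(p_adj, 4))

# Hypergeometric enrichment test for one phylum:
# M species on the chip, n of them belong to the phylum, N species were called
# significant, and k of those significant species are in the phylum.
M, n, N, k = 3000, 120, 60, 9
p_enrich = hypergeom.sf(k - 1, M, n, N)   # P(X >= k)
print(f"enrichment p-value for the phylum: {p_enrich:.4g}")
```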

  16. Meta-Analysis and the Solomon Four-Group Design.

    ERIC Educational Resources Information Center

    Sawilowsky, Shlomo; And Others

    1994-01-01

    A Monte Carlo study considers the use of meta-analysis with the Solomon four-group design. Experiment-wise Type I error properties and the relative power properties of Stouffer's Z in the Solomon four-group design are explored. Obstacles to conducting meta-analysis in the Solomon design are discussed. (SLD)
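
    Stouffer's Z combines the evidence from the design's separate comparisons by summing standard-normal deviates. A minimal sketch (the four p-values are hypothetical, one per comparison in a Solomon four-group layout):

```python
import numpy as np
from scipy.stats import norm

# One-sided p-values from the separate treatment-vs-control comparisons.
pvalues = np.array([0.04, 0.09, 0.02, 0.12])

z_scores = norm.isf(pvalues)                  # convert each p-value to a z-score
stouffer_z = z_scores.sum() / np.sqrt(len(z_scores))
combined_p = norm.sf(stouffer_z)

print(f"Stouffer's Z = {stouffer_z:.3f}, combined one-sided p = {combined_p:.4f}")
```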

  17. Automatic anatomy recognition in whole-body PET/CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Huiqian; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey

    Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work in three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties, and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process, to bring performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from modality-specific tissue property portrayal in the image. Results: Key ways of combining the above three basic ideas lead them to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels on whole-body low-dose CT images and 2 voxels on body-region-wise low-dose images of known true locations. Surprisingly, even on direct body-region-wise PET images, localization error within 3 voxels seems possible. Conclusions: The previous body-region-wise approach can be extended to whole-body torso with similar object localization performance. Combined use of image texture and intensity property yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.

  18. Deactivation of the left dorsolateral prefrontal cortex in Prader-Willi syndrome after meal consumption.

    PubMed

    Reinhardt, M; Parigi, A D; Chen, K; Reiman, E M; Thiyyagura, P; Krakoff, J; Hohenadel, M G; Le, D S N T; Weise, C M

    2016-09-01

    Prader-Willi syndrome (PWS) is a type of human genetic obesity that may give us information regarding the physiology of non-syndromic obesity. The objective of this study was to investigate the functional correlates of hunger and satiety in individuals with PWS in comparison with healthy controls with obesity, hypothesizing that we would see significant differences in activation in the left dorsolateral prefrontal cortex (DLPFC) based on prior findings. This study compared the central effects of food consumption in nine individuals with PWS (7 men, 2 women; body fat 35.3±10.0%) and seven controls (7 men; body fat 28.8±7.6%), matched for percentage body fat. H2(15)O-PET (positron emission tomography) scans were performed before and after consumption of a standardized liquid meal to obtain quantitative measures of regional cerebral blood flow (rCBF), a marker of neuronal activity. Compared with obese controls, PWS showed altered (P<0.05 family-wise error cluster-level corrected; voxelwise P<0.001) rCBF before and after meal consumption in multiple brain regions. There was a significant differential rCBF response within the left DLPFC after meal ingestion with decreases in DLPFC rCBF in PWS; in controls, DLPFC rCBF tended to remain unchanged. In more liberal analyses (P<0.05 family-wise error cluster-level corrected; voxelwise P<0.005), rCBF of the right orbitofrontal cortex (OFC) increased in PWS and decreased in controls. In PWS, ΔrCBF of the right OFC was associated with changes in appetite ratings. The pathophysiology of eating behavior in PWS is characterized by a paradoxical meal-induced deactivation of the left DLPFC and activation in the right OFC, brain regions implicated in the central regulation of eating behavior.

  19. Power of mental health nursing research: a statistical analysis of studies in the International Journal of Mental Health Nursing.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2013-02-01

    Having sufficient power to detect effect sizes of an expected magnitude is a core consideration when designing studies in which inferential statistics will be used. The main aim of this study was to investigate the statistical power in studies published in the International Journal of Mental Health Nursing. From volumes 19 (2010) and 20 (2011) of the journal, studies were analysed for their power to detect small, medium, and large effect sizes, according to Cohen's guidelines. The power of the 23 studies included in this review to detect small, medium, and large effects was 0.34, 0.79, and 0.94, respectively. In 90% of papers, no adjustments for experiment-wise error were reported. With a median of nine inferential tests per paper, the mean experiment-wise error rate was 0.51. A priori power analyses were only reported in 17% of studies. Although effect sizes for correlations and regressions were routinely reported, effect sizes for other tests (χ(2)-tests, t-tests, ANOVA/MANOVA) were largely absent from the papers. All types of effect sizes were infrequently interpreted. Researchers are strongly encouraged to conduct power analyses when designing studies, and to avoid scattergun approaches to data analysis (i.e. undertaking large numbers of tests in the hope of finding 'significant' results). Because reviewing effect sizes is essential for determining the clinical significance of study findings, researchers would better serve the field of mental health nursing if they reported and interpreted effect sizes. © 2012 The Authors. International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.
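
    The experiment-wise error rate quoted here follows from the standard formula for independent tests, FWER = 1 - (1 - alpha)^m. A hedged arithmetic sketch (assuming independence, which real test batteries rarely satisfy):

```python
# Experiment-wise (family-wise) Type I error rate for m independent tests at level alpha.
alpha = 0.05

for m in (1, 5, 9, 14, 20):
    fwer = 1 - (1 - alpha) ** m
    print(f"m = {m:2d} tests -> experiment-wise error rate ~ {fwer:.2f}")
```

    With nine tests the formula gives about 0.37; the higher mean of 0.51 reported above presumably reflects papers that ran many more tests than the median.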

  20. Wise regulates bone deposition through genetic interactions with Lrp5.

    PubMed

    Ellies, Debra L; Economou, Androulla; Viviano, Beth; Rey, Jean-Philippe; Paine-Saunders, Stephenie; Krumlauf, Robb; Saunders, Scott

    2014-01-01

    In this study using genetic approaches in mouse we demonstrate that the secreted protein Wise plays essential roles in regulating early bone formation through its ability to modulate Wnt signaling via interactions with the Lrp5 co-receptor. In Wise-/- mutant mice we find an increase in the rate of osteoblast proliferation and a transient increase in bone mineral density. This change in proliferation is dependent upon Lrp5, as Wise;Lrp5 double mutants have normal bone mass. This suggests that Wise serves as a negative modulator of Wnt signaling in active osteoblasts. Wise and the closely related protein Sclerostin (Sost) are expressed in osteoblast cells during temporally distinct early and late phases in a manner consistent with the temporal onset of their respective increased bone density phenotypes. These data suggest that Wise and Sost may have common roles in regulating bone development through their ability to control the balance of Wnt signaling. We find that Wise is also required to potentiate proliferation in chondrocytes, serving as a potential positive modulator of Wnt activity. Our analyses demonstrate that Wise plays a key role in processes that control the number of osteoblasts and chondrocytes during bone homeostasis and provide important insight into mechanisms regulating the Wnt pathway during skeletal development.

  1. Med Wise: A theory-based program to improve older adults' communication with pharmacists about their medicines.

    PubMed

    Martin, B A; Chewning, B A; Margolis, A R; Wilson, D A; Renken, J

    2016-01-01

    The health and economic toll of medication errors by older adults is well documented. Poor communication and medication coordination problems increase the likelihood of adverse drug events (ADEs). Older adults have difficulty communicating with health care professionals, including pharmacists. As such, the theory-based Med Wise program was designed. Building on the Self-efficacy Framework and the Chronic Care Model, this program was tested with community-dwelling older adults. This study and its resultant paper: (1) describe the theory-based design of the Med Wise program; (2) describe the collaboration of multiple community partners to develop a sustainable model for implementing Med Wise; and (3) present findings from the Med Wise course evaluation. Med Wise was designed to be a sustainable, skill-based educational and behavior change program consisting of two 2-h interactive classes to enhance participants' medication communication skills and self-efficacy. To explore the potential to disseminate Med Wise throughout the state, a partnership was formed between the pharmacy team and the statewide Aging & Disability Resource Centers (ADRCs), as well as the Community-Academic Aging Research Network (CAARN). Over 30 lay volunteer leaders in 8 Wisconsin (U.S. State) counties were trained, and they delivered Med Wise through ADRC community centers. The CAARN staff evaluated the fidelity of the course delivery by leaders. To evaluate Med Wise, a quasi-experimental design using pre/post surveys assessed knowledge, worry and self-efficacy. A telephone follow-up three months later assessed self-efficacy and translation of medication management skills and behaviors. Med Wise programs were presented to 198 community-dwelling older adults while maintaining program fidelity. This evaluation found significant increases in older adults' knowledge about pharmacists' roles and responsibilities, likelihood of talking with a pharmacist about medication concerns, and self-efficacy for communicating with pharmacists. At the 3-month follow-up, participants reported increased interactions with pharmacists, with 29.2% of participants reporting seeking medication reviews and 28.5% medication schedule reviews. The two-class Med Wise program showed sustained impact at 3 months on key outcomes. Further, the community partners successfully implemented the program with fidelity across 8 counties, suggesting its ability to be disseminated and sustained. Future directions include expanding the program to examine wider adoption, and measuring program impact on regimen safety and health outcomes linked to increases in patient engagement. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Infrared excesses in stars with and without planets using revised WISE photometry

    NASA Astrophysics Data System (ADS)

    Maldonado, Raul F.; Chavez, Miguel; Bertone, Emanuele; Cruz-Saenz de Miera, Fernando

    2017-11-01

    We present an analysis of the potential prevalence of mid-infrared excesses in stars with and without planetary companions. Based on an extended database of stars detected with the Wide-field Infrared Survey Explorer (WISE) satellite, we studied two stellar samples: one with 236 planet hosts and another with 986 objects for which planets have been searched for but not found. We determined the presence of an excess over the photosphere by comparing the observed flux ratio at 22 and 12 μm (f22/f12) with the corresponding synthetic value, derived from results of classical model photospheres. We found a detection rate of 0.85 per cent at 22 μm (two excesses) in the sample of stars with planets and 0.1 per cent (1 detection) for the stars without planets. The difference in the detection rate between the two samples is not statistically significant, a result that is independent of the different approaches found in the literature to define an excess in the wavelength range covered by WISE observations. As an additional result, we found that the WISE fluxes required a normalization procedure to make them compatible with synthetic data, probably pointing to a needed revision of the WISE data calibration.
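
    The claim that the two detection rates do not differ significantly can be checked with a simple contingency-table test. A sketch using the counts quoted in the abstract (2 excesses out of 236 hosts vs. 1 out of 986 non-hosts); Fisher's exact test is our illustrative choice, not necessarily the authors' procedure:

```python
from scipy.stats import fisher_exact

# Rows: stars with planets, stars without planets.
# Columns: 22-micron excess detected, not detected.
table = [[2, 236 - 2],
         [1, 986 - 1]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```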

  3. Adaptive graph-based multiple testing procedures

    PubMed Central

    Klinglmueller, Florian; Posch, Martin; Koenig, Franz

    2016-01-01

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong family-wise error rate control. To maintain the family-wise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733

  4. Effect of Variations in IRU Integration Time Interval On Accuracy of Aqua Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.; Tracewell, Dave

    2003-01-01

    During Aqua launch support, attitude analysts noticed several anomalies in Onboard Computer (OBC) rates and in rates computed by the ground Attitude Determination System (ADS). These included: 1) periodic jumps in the OBC pitch rate every 2 minutes; 2) spikes in the ADS pitch rate every 4 minutes; 3) close agreement between pitch rates computed by the ADS and those derived from telemetered OBC quaternions (in contrast to the step-wise pattern observed for telemetered OBC rates); 4) spikes of +/- 10 milliseconds in telemetered IRU integration time every 4 minutes (despite the fact that telemetered time tags of any two sequential IRU measurements were always 1 second apart). An analysis presented in the paper explains this anomalous behavior by a small average offset of about 0.5 +/- 0.05 microsec in the time interval between two sequential accumulated angle measurements. It is shown that errors in the estimated pitch angle due to neglecting the aforementioned variations in the integration time interval by the OBC are within +/- 2 arcseconds. Ground attitude solutions are found to be accurate enough to see the effect of the variations on the accuracy of the estimated pitch angle.

  5. Fade-resistant forward error correction method for free-space optical communications systems

    DOEpatents

    Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.

    2007-10-02

    Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.

  6. Wise Regulates Bone Deposition through Genetic Interactions with Lrp5

    PubMed Central

    Ellies, Debra L.; Economou, Androulla; Viviano, Beth; Rey, Jean-Philippe; Paine-Saunders, Stephenie; Krumlauf, Robb; Saunders, Scott

    2014-01-01

    In this study using genetic approaches in mouse we demonstrate that the secreted protein Wise plays essential roles in regulating early bone formation through its ability to modulate Wnt signaling via interactions with the Lrp5 co-receptor. In Wise−/− mutant mice we find an increase in the rate of osteoblast proliferation and a transient increase in bone mineral density. This change in proliferation is dependent upon Lrp5, as Wise;Lrp5 double mutants have normal bone mass. This suggests that Wise serves as a negative modulator of Wnt signaling in active osteoblasts. Wise and the closely related protein Sclerostin (Sost) are expressed in osteoblast cells during temporally distinct early and late phases in a manner consistent with the temporal onset of their respective increased bone density phenotypes. These data suggest that Wise and Sost may have common roles in regulating bone development through their ability to control the balance of Wnt signaling. We find that Wise is also required to potentiate proliferation in chondrocytes, serving as a potential positive modulator of Wnt activity. Our analyses demonstrate that Wise plays a key role in processes that control the number of osteoblasts and chondrocytes during bone homeostasis and provide important insight into mechanisms regulating the Wnt pathway during skeletal development. PMID:24789067

  7. Galaxy and Mass Assembly (GAMA): Mid-infrared Properties and Empirical Relations from WISE

    NASA Astrophysics Data System (ADS)

    Cluver, M. E.; Jarrett, T. H.; Hopkins, A. M.; Driver, S. P.; Liske, J.; Gunawardhana, M. L. P.; Taylor, E. N.; Robotham, A. S. G.; Alpaslan, M.; Baldry, I.; Brown, M. J. I.; Peacock, J. A.; Popescu, C. C.; Tuffs, R. J.; Bauer, A. E.; Bland-Hawthorn, J.; Colless, M.; Holwerda, B. W.; Lara-López, M. A.; Leschinski, K.; López-Sánchez, A. R.; Norberg, P.; Owers, M. S.; Wang, L.; Wilkins, S. M.

    2014-02-01

    The Galaxy And Mass Assembly (GAMA) survey furnishes a deep redshift catalog that, when combined with the Wide-field Infrared Survey Explorer (WISE), allows us to explore for the first time the mid-infrared properties of >110,000 galaxies over 120 deg² to z ≈ 0.5. In this paper we detail the procedure for producing the matched GAMA-WISE catalog for the G12 and G15 fields, in particular characterizing and measuring resolved sources; the complete catalogs for all three GAMA equatorial fields will be made available through the GAMA public releases. The wealth of multiwavelength photometry and optical spectroscopy allows us to explore empirical relations between optically determined stellar mass (derived from synthetic stellar population models) and 3.4 μm and 4.6 μm WISE measurements. Similarly, dust-corrected Hα-derived star formation rates can be compared to 12 μm and 22 μm luminosities to quantify correlations that can be applied to large samples to z < 0.5. To illustrate the applications of these relations, we use the 12 μm star formation prescription to investigate the behavior of specific star formation within the GAMA-WISE sample and underscore the ability of WISE to detect star-forming systems at z ~ 0.5. Within galaxy groups (determined by a sophisticated friends-of-friends scheme), results suggest that galaxies with a neighbor within 100 h⁻¹ kpc have, on average, lower specific star formation rates than typical GAMA galaxies with the same stellar mass.

  8. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
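
    The root-sum-square combination described here is straightforward to sketch. The component values below are invented placeholders, and all inputs are assumed to already be expressed as standard uncertainties in the same leak-rate units:

```python
import math

# Hypothetical standard-uncertainty contributions (same units as the leak rate).
components = {
    "calibration standard": 2.0e-9,
    "K-factor": 1.5e-9,
    "detector resolution": 0.5e-9,
    "repeatability": 1.2e-9,
    "hysteresis": 0.8e-9,
    "drift": 1.0e-9,
}

combined = math.sqrt(sum(u**2 for u in components.values()))  # root-sum-square
expanded = 2.0 * combined                                     # k = 2 coverage factor (~95%)

print(f"combined standard uncertainty: {combined:.2e}")
print(f"expanded uncertainty (k=2):    {expanded:.2e}")
```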

  9. Cure-WISE: HETDEX data reduction with Astro-WISE

    NASA Astrophysics Data System (ADS)

    Snigula, J. M.; Cornell, M. E.; Drory, N.; Fabricius, Max.; Landriau, M.; Hill, G. J.; Gebhardt, K.

    2012-09-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey instrument, VIRUS, consists of 75 IFUs distributed across the 22-arcmin field of the upgraded 9.2-m HET. Each exposure gathers 33,600 spectra. Over the projected five year run of the survey we expect about 170 GB of data per night. For the data reduction we developed the Cure pipeline. Cure is designed to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, and a searchable interface to the detected sources and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.

  10. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it allows the effect of random intensity noise to be reduced, it corrects outliers near the fringe borders that are commonly present when using intensity-discrete patterns, and it provides robustness against severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, such as monitoring deformations of components in nuclear power plants, where high reliability must be ensured even during short measurement disruptions. A special form of burst error is the so-called salt and pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
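    The paper does not reproduce its code construction here; as a minimal sketch of the per-pixel idea, the Python below protects the 4 fringe-order bits seen at one pixel with a standard Hamming(7,4) code, so any single corrupted frame in the 7-pattern temporal sequence is corrected. The bit assignment and pattern count are illustrative assumptions.

        import numpy as np

        def hamming74_encode(d):
            """Encode 4 data bits d = [d1, d2, d3, d4] into a 7-bit Hamming codeword.

            Positions (1-indexed): 1, 2, 4 are parity bits; 3, 5, 6, 7 carry d1..d4.
            """
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
            p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
            return np.array([p1, p2, d1, p4, d2, d3, d4], dtype=np.uint8)

        def hamming74_decode(r):
            """Correct up to one bit error and return the 4 data bits."""
            r = r.copy()
            # Syndrome bits recompute each parity over the received word.
            s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
            s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
            s4 = r[3] ^ r[4] ^ r[5] ^ r[6]
            pos = s1 + 2 * s2 + 4 * s4          # 1-indexed error position, 0 = no error
            if pos:
                r[pos - 1] ^= 1                 # flip the erroneous bit
            return r[[2, 4, 5, 6]]

        # Per-pixel example: 4 fringe-order bits protected by 3 extra projected patterns.
        data_bits = np.array([1, 0, 1, 1], dtype=np.uint8)
        tx = hamming74_encode(data_bits)
        rx = tx.copy()
        rx[5] ^= 1                               # simulate one corrupted frame at this pixel
        assert np.array_equal(hamming74_decode(rx), data_bits)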

  11. Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian

    2018-05-01

    We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
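    In PAS the amplitude levels are drawn from a Maxwell-Boltzmann distribution while the sign bits remain uniform; as a hedged sketch, the snippet below computes such a shaped distribution, its entropy and its average energy for the four amplitude levels of an 8-ASK (per-quadrature 64-QAM) constellation. The shaping parameter is an illustrative value, not one of the paper's operating points.

        import numpy as np

        def maxwell_boltzmann(amplitudes, nu):
            """P(a) proportional to exp(-nu * a^2) over the amplitude levels."""
            w = np.exp(-nu * amplitudes.astype(float) ** 2)
            return w / w.sum()

        amplitudes = np.array([1, 3, 5, 7])   # amplitude levels of 8-ASK (per quadrature of 64-QAM)
        nu = 0.05                              # illustrative shaping parameter

        p = maxwell_boltzmann(amplitudes, nu)
        entropy_amp = -np.sum(p * np.log2(p))  # bits carried per amplitude symbol
        avg_energy = np.sum(p * amplitudes.astype(float) ** 2)

        print("shaped amplitude distribution:", np.round(p, 3))
        print(f"amplitude entropy: {entropy_amp:.3f} bit (uniform would be 2.000 bit)")
        print(f"average energy:    {avg_energy:.2f} (uniform would be {np.mean(amplitudes**2.0):.2f})")

    Lowering the average energy at a given information rate (or, equivalently, raising the rate at fixed power) is the source of the shaping gain exploited by the scheme above.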

  12. Localized Glaucomatous Change Detection within the Proper Orthogonal Decomposition Framework

    PubMed Central

    Balasubramanian, Madhusudhanan; Kriegman, David J.; Bowd, Christopher; Holst, Michael; Weinreb, Robert N.; Sample, Pamela A.; Zangwill, Linda M.

    2012-01-01

    Purpose. To detect localized glaucomatous structural changes using proper orthogonal decomposition (POD) framework with false-positive control that minimizes confirmatory follow-ups, and to compare the results to topographic change analysis (TCA). Methods. We included 167 participants (246 eyes) with ≥4 Heidelberg Retina Tomograph (HRT)-II exams from the Diagnostic Innovations in Glaucoma Study; 36 eyes progressed by stereo-photographs or visual fields. All other patient eyes (n = 210) were non-progressing. Specificities were evaluated using 21 normal eyes. Significance of change at each HRT superpixel between each follow-up and its nearest baseline (obtained using POD) was estimated using mixed-effects ANOVA. Locations with significant reduction in retinal height (red pixels) were determined using Bonferroni, Lehmann-Romano k-family-wise error rate (k-FWER), and Benjamini-Hochberg false discovery rate (FDR) type I error control procedures. Observed positive rate (OPR) in each follow-up was calculated as a ratio of number of red pixels within disk to disk size. Progression by POD was defined as one or more follow-ups with OPR greater than the anticipated false-positive rate. TCA was evaluated using the recently proposed liberal, moderate, and conservative progression criteria. Results. Sensitivity in progressors, specificity in normals, and specificity in non-progressors, respectively, were POD-Bonferroni = 100%, 0%, and 0%; POD k-FWER = 78%, 86%, and 43%; POD-FDR = 78%, 86%, and 43%; POD k-FWER with retinal height change ≥50 μm = 61%, 95%, and 60%; TCA-liberal = 86%, 62%, and 21%; TCA-moderate = 53%, 100%, and 70%; and TCA-conservative = 17%, 100%, and 84%. Conclusions. With a stronger control of type I errors, k-FWER in POD framework minimized confirmatory follow-ups while providing diagnostic accuracy comparable to TCA. Thus, POD with k-FWER shows promise to reduce the number of confirmatory follow-ups required for clinical care and studies evaluating new glaucoma treatments. (ClinicalTrials.gov number, NCT00221897.) PMID:22491406
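    The k-FWER control used above can be illustrated with the single-step Lehmann-Romano rule, which rejects a hypothesis when its p-value is at most k*alpha/m and thereby keeps the probability of k or more false rejections below alpha. A minimal sketch follows; the p-values are simulated, not HRT superpixel data.

        import numpy as np

        def k_fwer_single_step(p_values, alpha=0.05, k=1):
            """Single-step Lehmann-Romano procedure: controls P(>= k false rejections) <= alpha.

            With k = 1 this reduces to the ordinary Bonferroni correction.
            """
            p_values = np.asarray(p_values, dtype=float)
            m = p_values.size
            return p_values <= k * alpha / m

        rng = np.random.default_rng(0)
        p = rng.uniform(size=10_000)           # stand-in for per-superpixel p-values
        p[:50] = rng.uniform(0, 1e-6, 50)      # a few strong "retinal height change" signals

        rejected_bonf = k_fwer_single_step(p, alpha=0.05, k=1)
        rejected_k5 = k_fwer_single_step(p, alpha=0.05, k=5)
        print(rejected_bonf.sum(), "rejections with FWER control (k = 1)")
        print(rejected_k5.sum(), "rejections with 5-FWER control (k = 5)")

    Allowing up to k false rejections relaxes the per-test threshold by a factor of k, which is why the k-FWER criterion recovers power relative to strict Bonferroni while still bounding the number of false-positive superpixels.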

  13. Update on the Wide-field Infrared Survey Explorer (WISE)

    NASA Technical Reports Server (NTRS)

    Mainzer, Amanda K.; Eisenhardt, Peter; Wright, Edward L.; Liu, Feng-Chuan; Irace, William; Heinrichsen, Ingolf; Cutri, Roc; Duval, Valerie

    2006-01-01

    The Wide-field Infrared Survey Explorer (WISE), a NASA MIDEX mission, will survey the entire sky in four bands from 3.3 to 23 microns with a sensitivity 1000 times greater than the IRAS survey. The WISE survey will extend the Two Micron All Sky Survey into the thermal infrared and will provide an important catalog for the James Webb Space Telescope. Using 1024 × 1024 HgCdTe and Si:As arrays at 3.3, 4.7, 12 and 23 microns, WISE will find the most luminous galaxies in the universe, the closest stars to the Sun, and it will detect most of the main belt asteroids larger than 3 km. The single WISE instrument consists of a 40 cm diamond-turned aluminum afocal telescope, a two-stage solid hydrogen cryostat, a scan mirror mechanism, and reimaging optics giving 5 arcsec resolution (full width at half maximum). The use of dichroics and beamsplitters allows four color images of a 47' × 47' field of view to be taken every 8.8 seconds, synchronized with the orbital motion to provide total sky coverage with overlap between revolutions. WISE will be placed into a Sun-synchronous polar orbit on a Delta 7320-10 launch vehicle. The WISE survey approach is simple and efficient. The three-axis-stabilized spacecraft rotates at a constant rate while the scan mirror freezes the telescope line of sight during each exposure. WISE has completed its mission Preliminary Design Review and its NASA Confirmation Review, and the project is awaiting confirmation from NASA to proceed to the Critical Design phase. Much of the payload hardware is now complete, and assembly of the payload will occur over the next year. WISE is scheduled to launch in late 2009; the project web site can be found at www.wise.ssl.berkeley.edu.

  14. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a "hard decision decoder" which, however, exploits soft information on the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as that of a soft decision decoder. In this paper, we instead analyze the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.

  15. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE PAGES

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; ...

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake to within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8%, while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
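    As background for the retrieval whose homogeneity assumption is violated here, the minimal sketch below inverts four line-of-sight velocities (beams tilted toward N, E, S and W) for u, v and w under the homogeneous-flow assumption; the 28° cone angle and the example wind are illustrative values, not the configuration simulated in the paper.

        import numpy as np

        def dbs_retrieval(v_n, v_e, v_s, v_w, cone_angle_deg=28.0):
            """Invert 4-beam DBS line-of-sight velocities assuming horizontally homogeneous flow.

            Beams are tilted from the vertical by `cone_angle_deg` toward N, E, S, W.
            Sign convention: positive LOS velocity is away from the instrument.
            """
            phi = np.deg2rad(cone_angle_deg)
            u = (v_e - v_w) / (2.0 * np.sin(phi))              # west-to-east component
            v = (v_n - v_s) / (2.0 * np.sin(phi))              # south-to-north component
            w = (v_n + v_e + v_s + v_w) / (4.0 * np.cos(phi))  # vertical component
            return u, v, w

        # Forward-simulate a homogeneous 6.5 m/s westerly wind, then invert it.
        u_true, v_true, w_true, phi = 6.5, 0.0, 0.0, np.deg2rad(28.0)
        v_e = u_true * np.sin(phi) + w_true * np.cos(phi)
        v_w = -u_true * np.sin(phi) + w_true * np.cos(phi)
        v_n = v_true * np.sin(phi) + w_true * np.cos(phi)
        v_s = -v_true * np.sin(phi) + w_true * np.cos(phi)
        print(dbs_retrieval(v_n, v_e, v_s, v_w))   # recovers (6.5, 0.0, 0.0) exactly

    In a wake the four beams sample different parts of the flow, so this inversion no longer holds; that is the source of the downwind errors quantified above.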

  16. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-01

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  17. Finite cohesion due to chain entanglement in polymer melts.

    PubMed

    Cheng, Shiwang; Lu, Yuyuan; Liu, Gengxin; Wang, Shi-Qing

    2016-04-14

    Three different types of experiments, quiescent stress relaxation, delayed rate-switching during stress relaxation, and elastic recovery after step strain, are carried out in this work to elucidate the existence of a finite cohesion barrier against free chain retraction in entangled polymers. Our experiments show that there is little hastened stress relaxation from step-wise shear up to γ = 0.7 and step-wise extension up to the stretching ratio λ = 1.5 at any time before or after the Rouse time. In contrast, a noticeable stress drop stemming from the built-in barrier-free chain retraction is predicted using the GLaMM model. In other words, the experiment reveals a threshold magnitude of step-wise deformation below which the stress relaxation follows identical dynamics whereas the GLaMM or Doi-Edwards model indicates a monotonic acceleration of the stress relaxation dynamics as a function of the magnitude of the step-wise deformation. Furthermore, a sudden application of startup extension during different stages of stress relaxation after a step-wise extension, i.e. the delayed rate-switching experiment, shows that the geometric condensation of entanglement strands in the cross-sectional area survives beyond the reptation time τd that is over 100 times the Rouse time τR. Our results point to the existence of a cohesion barrier that can prevent free chain retraction upon moderate deformation in well-entangled polymer melts.

  18. Cure-WISE: HETDEX Data Reduction with Astro-WISE

    NASA Astrophysics Data System (ADS)

    Snigula, J. M.; Drory, N.; Fabricius, M.; Landriau, M.; Montesano, F.; Hill, G. J.; Gebhardt, K.; Cornell, M. E.

    2014-05-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX, Hill et al. 2012b) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey will use an array of 75 integral field spectrographs called the Visible Integral field Replicable Unit (IFU) Spectrograph (VIRUS, Hill et al. 2012c). The 10m HET (Ramsey et al. 1998) is currently receiving a wide-field upgrade (Hill et al. 2012a) to accommodate the spectrographs and to provide the needed field of view. Over the projected five year run of the survey we expect to obtain approximately 170 GB of data each night. For the data reduction we developed the Cure pipeline, to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, and a searchable interface to the detected sources and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.

  19. Modelling airborne gravity data by means of adapted Space-Wise approach

    NASA Astrophysics Data System (ADS)

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. For this last application, airborne gravity observations are generally adopted because of the required accuracy and resolution. However, due to the relatively high acquisition velocity, atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations at both low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies by merging a GGM and the airborne dataset is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and properly accounts for the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the Space-Wise approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered well known a priori, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on covariance modelling, are also presented. Finally, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.
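    The along-track filtering step can be illustrated schematically. The sketch below, assuming simple analytic signal and noise spectra rather than the covariances the authors estimate from the data themselves, applies the Wiener gain W(f) = S(f) / (S(f) + N(f)) to a synthetic noisy track; the track length, spacing and spectra are placeholders.

        import numpy as np

        def wiener_filter_track(obs, signal_psd, noise_psd):
            """Frequency-domain Wiener filter of one along-track profile.

            signal_psd and noise_psd are model spectra evaluated on np.fft.rfftfreq.
            """
            spec = np.fft.rfft(obs)
            gain = signal_psd / (signal_psd + noise_psd)   # W(f) = S / (S + N)
            return np.fft.irfft(gain * spec, n=obs.size)

        # Synthetic track: smooth anomaly signal plus white observation noise (all illustrative).
        n, dx = 2048, 50.0                     # samples and along-track spacing in metres
        x = np.arange(n) * dx
        rng = np.random.default_rng(1)
        signal = 5.0 * np.sin(2 * np.pi * x / 40e3) + 2.0 * np.sin(2 * np.pi * x / 12e3)  # mGal
        obs = signal + rng.normal(0.0, 3.0, n)

        freq = np.fft.rfftfreq(n, d=dx)
        corner = 1.0 / 8e3                                # keep wavelengths longer than ~8 km
        signal_psd = 1.0 / (1.0 + (freq / corner) ** 4)   # assumed low-pass signal model
        noise_psd = np.full_like(freq, 0.05)              # assumed flat noise level

        filtered = wiener_filter_track(obs, signal_psd, noise_psd)
        print("rms error raw vs filtered:",
              np.sqrt(np.mean((obs - signal) ** 2)), np.sqrt(np.mean((filtered - signal) ** 2)))

    In the actual procedure the filtered tracks would then feed a Least Squares Collocation adjustment that accounts for the varying flight altitudes.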

  20. WISE: Winning with Stronger Education. ACCESS Research Final Report [and] Executive Summary.

    ERIC Educational Resources Information Center

    Carciun & Associates, Anchorage, AK.

    The Winning with Stronger Education Project (WISE) was designed to develop new ways of educating and training the multicultural population of Anchorage, Alaska. Data were obtained from several sources: a mail survey of 1,600 Anchorage residents (which yielded a 62% response rate); personal interviews with 25 key Alaska business leaders; a…

  1. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming the buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
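    As a conceptual illustration only (this toy omits the fixed-length code words, renormalization, and bit-probability modeling of the NASA scheme), the snippet below shows the interval-narrowing idea behind arithmetic coding of independent, biased bits and its decoding.

        def encode_bits(bits, p_one):
            """Narrow [0, 1) once per bit; p_one[i] is the modeled P(bit i == 1)."""
            low, high = 0.0, 1.0
            for b, p in zip(bits, p_one):
                split = low + (high - low) * (1.0 - p)   # [low, split) codes 0, [split, high) codes 1
                low, high = (split, high) if b else (low, split)
            return (low + high) / 2.0                    # any value in the final interval works

        def decode_bits(code, p_one):
            """Replay the same interval narrowing to recover the bits."""
            low, high, out = 0.0, 1.0, []
            for p in p_one:
                split = low + (high - low) * (1.0 - p)
                bit = int(code >= split)
                out.append(bit)
                low, high = (split, high) if bit else (low, split)
            return out

        # Sign/magnitude bits of Laplacian-quantized data are heavily biased toward 0, which is
        # where arithmetic coding gains over fixed-length codes (probabilities are illustrative).
        bits = [0, 0, 1, 0, 0, 0, 1, 0]
        p_one = [0.15] * len(bits)
        code = encode_bits(bits, p_one)
        assert decode_bits(code, p_one) == bits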

  2. Preliminary GOES-R ABI navigation and registration assessment results

    NASA Astrophysics Data System (ADS)

    Tan, B.; Dellomo, J.; Wolfe, R. E.; Reth, A. D.

    2017-12-01

    The US Geostationary Operational Environmental Satellite - R Series (GOES-R) was launched on November 19, 2016, and was designated GOES-16 upon reaching geostationary orbit ten days later. The Advanced Baseline Imager (ABI) is the primary instrument on the GOES-R series for imaging Earth's surface and atmosphere to aid in weather prediction and climate monitoring. We developed algorithms and software for independent verification of the ABI Image Navigation and Registration (INR). Since late January 2017, four INR metrics have been continuously generated to monitor the ABI INR performance: navigation (NAV) error, channel-to-channel registration (CCR) error, frame-to-frame registration (FFR) error, and within-frame registration (WIFR) error. In this paper, we describe the fundamental algorithm used for the image registration and briefly discuss the processing flow of the INR Performance Assessment Tool Set (IPATS) developed for ABI INR. The accuracy assessment shows that the IPATS measurement error is about 1/20 of a pixel. We then present the GOES-16 NAV assessment results, the primary metric, from January to August 2017. The INR has improved over time as post-launch tests were performed and corrections were applied. The mean NAV error of the visible and near infrared (VNIR) channels dropped from 20 μrad in January to around 5 μrad (+/-4 μrad, 1 σ) in June, while the mean NAV error of the long wave infrared (LWIR) channels dropped from around 70 μrad in January to around 5 μrad (+/-15 μrad, 1 σ) in June. A full global ABI image is composed of 22 east-west swaths. The swath-wise NAV error analysis shows some variation in the mean swath-wise NAV errors, up to about 20% of the scene mean NAV errors. As expected, the swaths over the tropical area have far fewer valid assessments (matchups) than those in the mid-latitude region due to cloud coverage. It was also found that there was a rotation (clocking) of the LWIR focal plane that was seen in both the NAV and CCR results. The rotation was corrected by an INR update in June 2017. Through deep-dive examinations of the scenes with large mean and/or variation in INR errors, we validated that IPATS is an excellent tool for assessing and improving the GOES-16 ABI INR and is also useful for long-term INR monitoring.

  3. Using Large-Scale Linkage Data to Evaluate the Effectiveness of a National Educational Program on Antithrombotic Prescribing and Associated Stroke Prevention in Primary Care.

    PubMed

    Liu, Zhixin; Moorin, Rachael; Worthington, John; Tofler, Geoffrey; Bartlett, Mark; Khan, Rabia; Zuo, Yeqin

    2016-10-13

    The National Prescribing Service (NPS) MedicineWise Stroke Prevention Program, which was implemented nationally in 2009-2010 in Australia, sought to improve antithrombotic prescribing in stroke prevention using dedicated interventions that target general practitioners. This study evaluated the impact of the NPS MedicineWise Stroke Prevention Program on antithrombotic prescribing and primary stroke hospitalizations. This population-based time series study used administrative health data linked to 45 and Up Study participants with a high risk of cardiovascular disease (CVD) to assess the possible impact of the NPS MedicineWise program on first-time aspirin prescriptions and primary stroke-related hospitalizations. Time series analysis showed that the NPS MedicineWise program was significantly associated with increased first-time prescribing of aspirin (P=0.03) and decreased hospitalizations for primary ischemic stroke (P=0.03) in the at-risk study population (n=90 023). First-time aspirin prescription was correlated with a reduction in the rate of hospitalization for primary stroke (P=0.02). Following intervention, the number of first-time aspirin prescriptions increased by 19.8% (95% confidence interval, 1.6-38.0), while the number of first-time stroke hospitalizations decreased by 17.3% (95% confidence interval, 1.8-30.0). Consistent with NPS MedicineWise program messages for the high-risk CVD population, the NPS MedicineWise Stroke Prevention Program (2009) was associated with increased initiation of aspirin and a reduced rate of hospitalization for primary stroke. The findings suggest that the provision of evidence-based multifaceted large-scale educational programs in primary care can be effective in changing prescriber behavior and positively impacting patient health outcomes. © 2016 The Authors and NPS MedicineWise. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  4. Erratum: Wise Detections of Known Qsos at Redshifts Greater than Six

    NASA Technical Reports Server (NTRS)

    Blain, Andrew W.; Assef, Roberto; Stern, Daniel K.; Tsai, Chao-Wei; Eisenhardt, Peter; Bridge, Carrie; Benford, Dominic J; Jarrett, Tom; Cutri, Roc; Petty, Sara

    2014-01-01

    In the published version of this paper, Roberto Assef was mistakenly affiliated with the Division of Astronomy and Astrophysics at the University of California, Los Angeles. This is incorrect. Dr. Assef's affiliation correctly appears in this erratum as the Nucleo de Astronomia de la Facultad de Ingenieria, Universidad Diego Portales, Av. Ejercito 441, Santiago, Chile. IOP Publishing sincerely regrets this error.

  5. The Perihelion Emission of Comet C/2010 L5 (WISE)

    NASA Astrophysics Data System (ADS)

    Kramer, E. A.; Bauer, J. M.; Fernandez, Y. R.; Stevenson, R.; Mainzer, A. K.; Grav, T.; Masiero, J.; Nugent, C.; Sonnett, S.

    2017-03-01

    The only Halley-type comet discovered by the Wide-Field Infrared Survey Explorer (WISE), C/2010 L5 (WISE), was imaged three times by WISE, and it showed a significant dust tail during the second and third visits (2010 June and July, respectively). We present here an analysis of the data collected by WISE, putting estimates on the comet’s size, dust production rate, gas production (CO+CO2) rate, and active fraction. We also present a detailed description of a novel tail-fitting technique that allows the commonly used syndyne-synchrone models to be used analytically, thereby giving more robust results. We find that C/2010 L5's dust tail was likely formed by strong emission, likely in the form of an outburst, occurring when the comet was within a few days of perihelion. Analyses of the June and July data independently agree on this result. The two separate epochs of dust tail analysis independently suggest a strong emission event close to perihelion. The average size of the dust particles in the dust tail increased between the epochs, suggesting that the dust was primarily released in a short period of time, and the smaller dust particles were quickly swept away by solar radiation pressure, leaving the larger particles behind. The difference in CO2 and dust production rates measured in 2010 June and July is not consistent with “normal” steady-state gas production from a comet at these heliocentric distances, suggesting that much of the detected CO2 and dust was produced in an episodic event. Together, these conclusions suggest that C/2010 L5 experienced a significant outburst event when the comet was close to perihelion.

  6. The Perihelion Emission of Comet C/2010 L5 (WISE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, E. A.; Bauer, J. M.; Stevenson, R.

    The only Halley-type comet discovered by the Wide-Field Infrared Survey Explorer (WISE), C/2010 L5 (WISE), was imaged three times by WISE, and it showed a significant dust tail during the second and third visits (2010 June and July, respectively). We present here an analysis of the data collected by WISE, putting estimates on the comet's size, dust production rate, gas production (CO+CO2) rate, and active fraction. We also present a detailed description of a novel tail-fitting technique that allows the commonly used syndyne-synchrone models to be used analytically, thereby giving more robust results. We find that C/2010 L5's dust tail was likely formed by strong emission, likely in the form of an outburst, occurring when the comet was within a few days of perihelion. Analyses of the June and July data independently agree on this result. The two separate epochs of dust tail analysis independently suggest a strong emission event close to perihelion. The average size of the dust particles in the dust tail increased between the epochs, suggesting that the dust was primarily released in a short period of time, and the smaller dust particles were quickly swept away by solar radiation pressure, leaving the larger particles behind. The difference in CO2 and dust production rates measured in 2010 June and July is not consistent with "normal" steady-state gas production from a comet at these heliocentric distances, suggesting that much of the detected CO2 and dust was produced in an episodic event. Together, these conclusions suggest that C/2010 L5 experienced a significant outburst event when the comet was close to perihelion.

  7. Automated segmentation of chronic stroke lesions using LINDA: Lesion Identification with Neighborhood Data Analysis

    PubMed Central

    Pustina, Dorian; Coslett, H. Branch; Turkeltaub, Peter E.; Tustison, Nicholas; Schwartz, Myrna F.; Avants, Brian

    2015-01-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696 ± 0.16, a Hausdorff distance of 17.9 ± 9.8 mm, and an average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discrepancies, which we discuss. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
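    For reference, the Dice overlap quoted above is a simple agreement score between binary masks; a minimal sketch with simulated volumes (not LINDA itself or its data) follows.

        import numpy as np

        def dice(pred, truth):
            """Dice coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks (1 = lesion)."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            denom = pred.sum() + truth.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

        # Tiny synthetic 3-D "volumes" standing in for manual and predicted lesion maps.
        rng = np.random.default_rng(7)
        truth = rng.random((32, 32, 32)) < 0.05
        pred = truth.copy()
        pred[rng.random(pred.shape) < 0.01] ^= True   # perturb about 1% of voxels
        print(f"Dice overlap: {dice(pred, truth):.3f}")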

  8. Persistent antidepressant effect of low-dose ketamine and activation in the supplementary motor area and anterior cingulate cortex in treatment-resistant depression: A randomized control study.

    PubMed

    Chen, Mu-Hong; Li, Cheng-Ta; Lin, Wei-Chen; Hong, Chen-Jee; Tu, Pei-Chi; Bai, Ya-Mei; Cheng, Chih-Ming; Su, Tung-Ping

    2018-01-01

    A single low-dose ketamine infusion exhibited a rapid antidepressant effect within 1 h. Despite its short biological half-life (approximately 3 h), the antidepressant effect of ketamine has been demonstrated to persist for several days. However, changes in brain function responsible for the persistent antidepressant effect of a single low-dose ketamine infusion remain unclear. METHODS: Twenty-four patients with treatment-resistant depression (TRD) were randomized into three groups according to the treatment received: 0.5 mg/kg ketamine, 0.2 mg/kg ketamine, and normal saline infusion. Standardized uptake values (SUVs) of glucose metabolism, measured through 18F-FDG positron emission tomography before infusion and 1 day after a 40-min ketamine or normal saline infusion, were used for subsequent whole-brain voxel-wise analysis and were correlated with depressive symptoms, as defined using the Hamilton Depression Rating Scale-17 (HDRS-17) score. RESULTS: The voxel-wise analysis revealed that patients with TRD receiving the 0.5 mg/kg ketamine infusion had significantly higher SUVs (corrected for family-wise errors, P = 0.014) in the supplementary motor area (SMA) and dorsal anterior cingulate cortex (dACC) than did those receiving the 0.2 mg/kg ketamine infusion. The increase in the SUV in the dACC was negatively correlated with depressive symptoms at 1 day after ketamine infusion. DISCUSSION: The persistent antidepressant effect of a 0.5 mg/kg ketamine infusion may be mediated by increased activation in the SMA and dACC. The higher increase in dACC activation was related to the reduction in depressive symptoms after ketamine infusion. A 0.5 mg/kg ketamine infusion facilitated glutamatergic neurotransmission in the SMA and dACC, which may be responsible for the persistent antidepressant effect of ketamine much beyond its half-life. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. VARIABLE SELECTION FOR QUALITATIVE INTERACTIONS IN PERSONALIZED MEDICINE WHILE CONTROLLING THE FAMILY-WISE ERROR RATE

    PubMed Central

    Gunter, Lacey; Zhu, Ji; Murphy, Susan

    2012-01-01

    For many years, subset analysis has been a popular topic for the biostatistics and clinical trials literature. In more recent years, the discussion has focused on finding subsets of genomes which play a role in the effect of treatment, often referred to as stratified or personalized medicine. Though highly sought after, methods for detecting subsets with altering treatment effects are limited and lacking in power. In this article we discuss variable selection for qualitative interactions with the aim to discover these critical patient subsets. We propose a new technique designed specifically to find these interaction variables among a large set of variables while still controlling for the number of false discoveries. We compare this new method against standard qualitative interaction tests using simulations and give an example of its use on data from a randomized controlled trial for the treatment of depression. PMID:22023676

  10. Statistical significance of combinatorial regulations

    PubMed Central

    Terada, Aika; Okada-Hatakeyama, Mariko; Tsuda, Koji; Sese, Jun

    2013-01-01

    More than three transcription factors often work together to enable cells to respond to various signals. The detection of combinatorial regulation by multiple transcription factors, however, is not only computationally nontrivial but also extremely unlikely because of multiple testing correction. The exponential growth in the number of tests forces us to set a strict limit on the maximum arity. Here, we propose an efficient branch-and-bound algorithm called the “limitless arity multiple-testing procedure” (LAMP) to count the exact number of testable combinations and calibrate the Bonferroni factor to the smallest possible value. LAMP lists significant combinations without any limit, whereas the family-wise error rate is rigorously controlled under the threshold. In the human breast cancer transcriptome, LAMP discovered statistically significant combinations of as many as eight binding motifs. This method may contribute to uncover pathways regulated in a coordinated fashion and find hidden associations in heterogeneous data. PMID:23882073
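    LAMP builds on Tarone's observation that hypotheses (here, motif combinations) whose minimum attainable p-value already exceeds the corrected threshold can never be significant and therefore need not inflate the Bonferroni factor. A schematic sketch of that calibration follows; the minimum attainable p-values are supplied as an illustrative list rather than computed from contingency tables as in the actual procedure, and the branch-and-bound enumeration of LAMP is not shown.

        def tarone_threshold(min_attainable_p, alpha=0.05):
            """Return (k, corrected threshold alpha/k) via Tarone-style calibration.

            min_attainable_p[i] is the smallest p-value hypothesis i could ever achieve
            (for Fisher's exact test this depends only on the marginals / support).
            """
            k = 1
            while True:
                m_k = sum(p <= alpha / k for p in min_attainable_p)  # testable at level alpha/k
                if m_k <= k:                                         # FWER <= m_k * alpha / k <= alpha
                    return k, alpha / k
                k += 1

        # Illustrative minimum attainable p-values for candidate motif combinations.
        min_p = [1e-6, 1e-5, 3e-4, 2e-3, 0.02, 0.04, 0.2, 0.5, 0.5, 0.9]
        k, thresh = tarone_threshold(min_p)
        print(f"Bonferroni factor calibrated to k = {k}; reject observed p-values <= {thresh:.2e}")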

  11. Clinical Decisions Made in Primary Care Clinics Before and After Choosing Wisely.

    PubMed

    Kost, Amanda; Genao, Inginia; Lee, Jay W; Smith, Stephen R

    2015-01-01

    The Choosing Wisely campaign encourages physicians to avoid low-value care. Although the campaign is widely lauded, no study has examined its impact on clinical decisions made in primary care settings. We compared clinical decisions made for 5 Choosing Wisely recommendations over two 6-month time periods before and after the campaign launch and an educational intervention to promote it at 3 primary care residency clinics. The rate of adherence to the recommendations was high (93.2%) at baseline but increased significantly, to 96.5%, after the launch. These findings suggest that primary care physicians respond to training and publicity about low-value care, though further research is needed. Given that even small decreases in physician test ordering can produce large cost savings, the Choosing Wisely project may help achieve the health care triple aim. © Copyright 2015 by the American Board of Family Medicine.

  12. Information systems as a tool to improve legal metrology activities

    NASA Astrophysics Data System (ADS)

    Rodrigues Filho, B. A.; Soratto, A. N. R.; Gonçalves, R. F.

    2016-07-01

    This study explores the importance of information systems applied to legal metrology as a tool to improve the control of measuring instruments used in trade. The information system implemented in Brazil has also helped in understanding and appraising measurement control, based on the behavior of the errors and deviations of instruments used in trade, allowing resources to be allocated wisely and leading to more effective planning and control in the legal metrology field. A case study analyzing the fuel sector is carried out in order to assess the conformity of fuel dispensers with the maximum permissible errors. The statistics of measurement errors of 167,310 fuel dispensers of gasoline, ethanol and diesel used in the field were analyzed, demonstrating the conformity of the fuel market in Brazil with the legal requirements.

  13. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which makes the comparison of two CADe systems involve multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both the FWER and power.
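    For context, a standard step-up procedure of the kind compared here is Hochberg's; the sketch below (with illustrative p-values, not the paper's simulation settings) rejects every hypothesis at or below the largest index i whose ordered p-value satisfies p_(i) <= alpha / (m - i + 1).

        import numpy as np

        def hochberg_step_up(p_values, alpha=0.05):
            """Hochberg's step-up procedure; returns a boolean rejection mask."""
            p = np.asarray(p_values, dtype=float)
            m = p.size
            order = np.argsort(p)                  # ascending
            sorted_p = p[order]
            reject = np.zeros(m, dtype=bool)
            # Find the largest i (1-indexed) with p_(i) <= alpha / (m - i + 1).
            thresholds = alpha / (m - np.arange(1, m + 1) + 1)
            passing = np.nonzero(sorted_p <= thresholds)[0]
            if passing.size:
                cutoff = passing.max()
                reject[order[:cutoff + 1]] = True  # reject that hypothesis and all smaller p-values
            return reject

        # Sensitivity differences of two CADe systems at 4 operating points (illustrative p-values).
        p_vals = [0.004, 0.020, 0.031, 0.12]
        print(hochberg_step_up(p_vals, alpha=0.05))

    The adjusted variant studied in the paper would further relax these thresholds using the estimated correlations among operating points.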

  14. A New Flow-Diverter (the FloWise): In-Vivo Evaluation in an Elastase-Induced Rabbit Aneurysm Model.

    PubMed

    Kim, Byung Moon; Kim, Dong Joon; Kim, Dong Ik

    2016-01-01

    We aimed to evaluate the efficacy and safety of a newly developed, partially retrievable flow-diverter (the FloWise) in an elastase-induced rabbit aneurysm model. We developed a partially retrievable flow diverter composed of 48 strands of Nitinol and platinum wire. The FloWise is compatible with any microcatheter of 0.027-inch inner diameter, and is retrievable up to 70% deployment. The efficacy and safety of the FloWise were evaluated in the elastase-induced rabbit aneurysm model. Technical success (full coverage of the aneurysm neck) was recorded, and aneurysm occlusion and stent patency were assessed with angiograms and histologic examinations at the 1-month, 3-month, and 6-month follow-ups. The patency of small arterial branches (intercostal or lumbar arteries) covered by the FloWise was also assessed in 5 subjects. We attempted FloWise insertion in a total of 32 aneurysm models. FloWise placement was successful in 31 subjects (96.9%). Two stents (6.2%) were occluded at the 3-month follow-up, but there was no evidence of in-stent stenosis in the other subjects. All stented aneurysms showed progressive occlusion: grade I (complete aneurysm occlusion) in 44.4% and grade II (aneurysm occlusion > 90%) in 55.6% at 1 month; grade I in 90% and II in 10% at 3 months; and grade I in 90% and II in 10% at 6 months. All small arterial branches covered by the FloWise remained patent. The newly developed, partially retrievable flow-diverter appears to be a safe and effective tool for aneurysm occlusion, as evaluated in the rabbit aneurysm model.

  15. Impact of gradient timing error on the tissue sodium concentration bioscale measured using flexible twisted projection imaging

    NASA Astrophysics Data System (ADS)

    Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.

    2011-12-01

    The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ˜16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.

  16. Auditory Proprioceptive Integration: Effects of Real-Time Kinematic Auditory Feedback on Knee Proprioception

    PubMed Central

    Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.

    2018-01-01

    The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory-proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of the step-wise transposition of the frequency. In a first step, the results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259

  17. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatial-spectral correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is also divided into four components: an FPN row component, an FPN column component, a defects component, and an effective photo response signal component. The conversion gain is then corrected to average out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated from the inverse of the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
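    A schematic sketch of the pixel-wise linear model described above follows: per-pixel gain G_i,j and dark offset Z_i,j are estimated from a dark frame and a flat-field frame at known radiance, and raw digital counts are then inverted back to radiance so the fixed pattern cancels. The simulated frames and values are assumptions for illustration, not measurements from the crime scene imaging sensor.

        import numpy as np

        rng = np.random.default_rng(3)
        shape = (64, 64)

        # Simulated pixel-wise linear response DC = G * L + Z with column-wise FPN in the gain.
        gain_true = (1.0 + 0.05 * rng.standard_normal(shape)
                     + 0.10 * np.sin(np.arange(shape[1]) / 3.0))     # column-wise FPN pattern
        offset_true = 20.0 + 2.0 * rng.standard_normal(shape)        # DSNU Z_ij

        def acquire(radiance):
            return gain_true * radiance + offset_true                # noiseless for clarity

        # Calibration: dark frame (L = 0) gives Z_ij; a flat field at known radiance gives G_ij.
        L_flat = 100.0
        Z_hat = acquire(0.0)
        G_hat = (acquire(L_flat) - Z_hat) / L_flat

        # Correction of an arbitrary scene: invert DC back to a radiance estimate.
        scene = 50.0 + 30.0 * rng.random(shape)
        raw_dc = acquire(scene)
        corrected = (raw_dc - Z_hat) / G_hat

        print("raw non-uniformity (std of DC for a flat 50-unit scene): "
              f"{np.std(acquire(np.full(shape, 50.0))):.2f}")
        print(f"residual error after correction: {np.max(np.abs(corrected - scene)):.2e}")

    A real sensor adds shot and read noise, so in practice the dark and flat frames would be averaged over many exposures before the gains and offsets are estimated.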

  18. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    Low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of on-orbit low frequency error analysis and calibration, which includes optical axis angle variation detection for the star sensors, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle variation detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to unify the datum and produce high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model in this paper describes the law of the low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 Coordinate System is obviously improved after the step-wise calibration.

  19. Kinematic Sunyaev-Zel'dovich Effect with Projected Fields: A Novel Probe of the Baryon Distribution with Planck, WMAP, and WISE Data.

    PubMed

    Hill, J Colin; Ferraro, Simone; Battaglia, Nick; Liu, Jia; Spergel, David N

    2016-07-29

    The kinematic Sunyaev-Zel'dovich (KSZ) effect-the Doppler boosting of cosmic microwave background (CMB) photons due to Compton scattering off free electrons with nonzero bulk velocity-probes the abundance and the distribution of baryons in the Universe. All KSZ measurements to date have explicitly required spectroscopic redshifts. Here, we implement a novel estimator for the KSZ-large-scale structure cross-correlation based on projected fields: it does not require redshift estimates for individual objects, allowing KSZ measurements from large-scale imaging surveys. We apply this estimator to cleaned CMB temperature maps constructed from Planck and WMAP data and a galaxy sample from the Wide-field Infrared Survey Explorer (WISE). We measure the KSZ effect at 3.8σ-4.5σ significance, depending on the use of additional WISE galaxy bias constraints. We verify that our measurements are robust to possible dust emission from the WISE galaxies. Assuming the standard Λ cold dark matter cosmology, we directly constrain (f_{b}/0.158)(f_{free}/1.0)=1.48±0.19 (statistical error only) at redshift z≈0.4, where f_{b} is the fraction of matter in baryonic form and f_{free} is the free electron fraction. This is the tightest KSZ-derived constraint reported to date on these parameters. Astronomers have long known that baryons do not trace dark matter on ∼ kiloparsec scales and there has been strong evidence that galaxies are baryon poor. The consistency between the f_{b} value found here and the values inferred from analyses of the primordial CMB and big bang nucleosynthesis verifies that baryons approximately trace the dark matter distribution down to ∼ megaparsec scales. While our projected-field estimator is already competitive with other KSZ approaches when applied to current data sets (because we are able to use the full-sky WISE photometric survey), it will yield enormous signal-to-noise ratios when applied to upcoming high-resolution, multifrequency CMB surveys.

  20. Kinematic Sunyaev-Zel'dovich Effect with Projected Fields: A Novel Probe of the Baryon Distribution with Planck, WMAP, and WISE Data

    NASA Astrophysics Data System (ADS)

    Hill, J. Colin; Ferraro, Simone; Battaglia, Nick; Liu, Jia; Spergel, David N.

    2016-07-01

    The kinematic Sunyaev-Zel'dovich (KSZ) effect—the Doppler boosting of cosmic microwave background (CMB) photons due to Compton scattering off free electrons with nonzero bulk velocity—probes the abundance and the distribution of baryons in the Universe. All KSZ measurements to date have explicitly required spectroscopic redshifts. Here, we implement a novel estimator for the KSZ—large-scale structure cross-correlation based on projected fields: it does not require redshift estimates for individual objects, allowing KSZ measurements from large-scale imaging surveys. We apply this estimator to cleaned CMB temperature maps constructed from Planck and WMAP data and a galaxy sample from the Wide-field Infrared Survey Explorer (WISE). We measure the KSZ effect at 3.8 σ - 4.5 σ significance, depending on the use of additional WISE galaxy bias constraints. We verify that our measurements are robust to possible dust emission from the WISE galaxies. Assuming the standard Λ cold dark matter cosmology, we directly constrain (fb/0.158 ) (ffree/1.0 ) =1.48 ±0.19 (statistical error only) at redshift z ≈0.4 , where fb is the fraction of matter in baryonic form and ffree is the free electron fraction. This is the tightest KSZ-derived constraint reported to date on these parameters. Astronomers have long known that baryons do not trace dark matter on ˜ kiloparsec scales and there has been strong evidence that galaxies are baryon poor. The consistency between the fb value found here and the values inferred from analyses of the primordial CMB and big bang nucleosynthesis verifies that baryons approximately trace the dark matter distribution down to ˜ megaparsec scales. While our projected-field estimator is already competitive with other KSZ approaches when applied to current data sets (because we are able to use the full-sky WISE photometric survey), it will yield enormous signal-to-noise ratios when applied to upcoming high-resolution, multifrequency CMB surveys.

  1. GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling

    NASA Astrophysics Data System (ADS)

    Howard, S.; Murray-Krezan, J.; Dao, P.; Surka, D.

    From December 2009 through 2011 the NASA Wide-Field Infrared Survey Explorer (WISE) gathered radiometrically exquisite measurements of debris in near Earth orbits, substantially augmenting the current catalog of known debris. The WISE GEO-belt debris population adds approximately 2,000 previously uncataloged objects. This paper describes characterization of the WISE GEO-belt orbital debris population in terms of location, epoch, and size. The WISE GEO-belt debris population characteristics are compared with the publicly available U.S. catalog and previous descriptions of the GEO-belt debris population. We found that our results differ from previously published debris distributions, suggesting the need for updates to collision probability models and a better measurement-based understanding of the debris population. Previous studies of collisional rates in GEO invoke the presence of a large number of debris objects in the regime of sizes too small to track, i.e. not in the catalog, but large enough to cause significant damage and fragmentation in a collision. A common approach is to estimate that population of small debris by assuming that it is dominated by fragments and therefore should follow trends observed in fragmentation events or laboratory fragmentation tests. In other words, the population of debris can be extrapolated from trackable sizes to small sizes using an empirically determined trend of population as a function of size. We use new information suggested by the analysis of WISE IR measurements to propose an updated relationship. Our trend is an improvement because we expect an IR emissive signature to be a more reliable indicator of physical size. Based on the revised relationship, we re-estimate the total collisional rate in the GEO belt, including projected uncatalogued debris and applying a conjunction assessment technique. Through modeling, we evaluate the hot spots near the geopotential wells and the effects of fragmentation in the GEO graveyard on collisions with GEO objects.
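    The extrapolation from trackable to sub-trackable sizes is conventionally a power-law fit to the cumulative size distribution, N(>d) = A * d^b with b < 0, fitted over trackable sizes and extended downward. The sketch below uses made-up counts and a simple log-log fit, not the WISE-derived relationship proposed in the paper.

        import numpy as np

        # Illustrative cumulative counts N(>d) of GEO-belt objects per size threshold (metres);
        # these numbers are placeholders, not catalog- or WISE-derived values.
        d_tracked = np.array([1.0, 2.0, 4.0, 8.0])
        n_tracked = np.array([3000.0, 900.0, 260.0, 80.0])

        # Fit log10 N = intercept + slope * log10(d); for debris the slope is negative.
        slope, intercept = np.polyfit(np.log10(d_tracked), np.log10(n_tracked), 1)
        A = 10.0 ** intercept

        def n_greater(d_metres):
            """Extrapolated cumulative number of objects larger than d_metres."""
            return A * d_metres ** slope

        for d in (0.1, 0.3, 1.0):
            print(f"N(>{d} m) ~ {n_greater(d):,.0f}")

    The conjunction-assessment step would then weight these extrapolated counts by encounter geometry to produce a total collision rate.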

  2. TREATMENT INTENSIFICATION WITH INSULIN DEGLUDEC/INSULIN ASPART TWICE DAILY: RANDOMIZED STUDY TO COMPARE SIMPLE AND STEP-WISE TITRATION ALGORITHMS.

    PubMed

    Gerety, Gregg; Bebakar, Wan Mohamad Wan; Chaykin, Louis; Ozkaya, Mesut; Macura, Stanislava; Hersløv, Malene Lundgren; Behnke, Thomas

    2016-05-01

    This 26-week, multicenter, randomized, open-label, parallel-group, treat-to-target trial in adults with type 2 diabetes compared the efficacy and safety of treatment intensification algorithms with twice-daily (BID) insulin degludec/insulin aspart (IDegAsp). Patients randomized 1:1 to IDegAsp BID used either a 'Simple' algorithm (twice-weekly dose adjustments based on a single prebreakfast and pre-evening meal self-monitored plasma glucose [SMPG] measurement; IDegAsp[BIDSimple], n = 136) or a 'Stepwise' algorithm (once-weekly dose adjustments based on the lowest of 3 pre-breakfast and 3 pre-evening meal SMPG values; IDegAsp[BIDStep-wise], n = 136). After 26 weeks, mean change from baseline in glycated hemoglobin (HbA1c) with IDegAsp[BIDSimple] was noninferior to IDegAsp[BIDStep-wise] (-15 mmol/mol versus -14 mmol/mol; 95% confidence interval [CI] upper limit, <4 mmol/mol) (baseline HbA1c: 66.3 mmol/mol IDegAsp[BIDSimple] and 66.6 mmol/mol IDegAsp[BIDStep-wise]). The proportion of patients who achieved HbA1c <7.0% (<53 mmol/mol) at the end of the trial was 66.9% with IDegAsp[BIDSimple] and 62.5% with IDegAsp[BIDStep-wise]. Fasting plasma glucose levels were reduced with each titration algorithm (-1.51 mmol/L IDegAsp[BIDSimple] versus -1.95 mmol/L IDegAsp[BIDStep-wise]). Weight gain was 3.8 kg IDegAsp[BIDSimple] versus 2.6 kg IDegAsp[BIDStep-wise], and rates of overall confirmed hypoglycemia (5.16 episodes per patient-year of exposure [PYE] versus 8.93 PYE) and nocturnal confirmed hypoglycemia (0.78 PYE versus 1.33 PYE) were significantly lower with IDegAsp[BIDStep-wise] versus IDegAsp[BIDSimple]. There were no significant differences in insulin dose increments between groups. Treatment intensification with IDegAsp[BIDSimple] was noninferior to IDegAsp[BIDStep-wise]. Both titration algorithms were well tolerated; however, the more conservative step-wise algorithm led to less weight gain and fewer hypoglycemic episodes. Clinicaltrials.gov: NCT01680341.

  3. Overdensities of SMGs around WISE-selected, ultraluminous, high-redshift AGNs

    NASA Astrophysics Data System (ADS)

    Jones, Suzy F.; Blain, Andrew W.; Assef, Roberto J.; Eisenhardt, Peter; Lonsdale, Carol; Condon, James; Farrah, Duncan; Tsai, Chao-Wei; Bridge, Carrie; Wu, Jingwen; Wright, Edward L.; Jarrett, Tom

    2017-08-01

    We investigate extremely luminous dusty galaxies in the environments around Wide-field Infrared Survey Explorer (WISE)-selected hot dust-obscured galaxies (Hot DOGs) and WISE/radio-selected active galactic nuclei (AGNs) at average redshifts of z = 2.7 and 1.7, respectively. Previous observations have detected overdensities of companion submillimetre-selected sources around 10 Hot DOGs and 30 WISE/radio AGNs, with overdensities of ˜2-3 and ˜5-6, respectively. We find the space densities in both samples to be overdense compared to normal star-forming galaxies and submillimetre galaxies (SMGs) in the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). Both samples of companion sources have mid-infrared (mid-IR) colours and mid-IR to submm ratios consistent with those of SMGs. The brighter population around WISE/radio AGNs could be responsible for the higher overdensity reported. We also find that the star formation rate densities are higher than in the field, but consistent with those of clusters of dusty galaxies. WISE-selected AGNs appear to be good signposts for protoclusters at high redshift on arcmin scales. The results reported here provide an upper limit to the strength of angular clustering using the two-point correlation function. Monte Carlo simulations show no angular correlation, which could indicate protoclusters on scales larger than the SCUBA-2 1.5-arcmin scale maps.

  4. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements arranged in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in the robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  5. Enhanced Condensation Heat Transfer On Patterned Surfaces

    NASA Astrophysics Data System (ADS)

    Alizadeh-Birjandi, Elaheh; Kavehpour, H. Pirouz

    2017-11-01

    The transition from film-wise to drop-wise condensation can improve the efficiency of thermal management applications and result in savings of millions of dollars in investment and operating costs every year. The methods currently available rely on either hydrophobic coatings or nanostructured surfaces. The former adhere poorly to the substrate and tend to detach easily under working conditions, the fabrication techniques for the latter are neither cost-effective nor scalable, and both use low-thermal-conductivity materials that would negate the heat transfer enhancement from drop-wise condensation. Existing technologies are therefore limited in their ability to enhance vapor-to-liquid condensation. This work focuses on the development of surfaces with wettability contrast to promote drop-wise condensation, whose overall heat transfer efficiency is 2-3 times that of film-wise condensation, while maintaining a high conduction rate through the surface at low manufacturing cost. The variation in interfacial energy is achieved by crafting hydrophobic patterns onto the metal surface via scalable fabrication techniques. The results of experimental and surface optimization studies are also presented.

  6. The Mission Operations System for Wide-field Infrared Survey Explorer (WISE)

    NASA Technical Reports Server (NTRS)

    Heinrichsen, Ingolf H.

    2006-01-01

    The goal of the Wide-field Infrared Survey Explorer (WISE) mission is to perform a highly sensitive all-sky survey in 4 wavebands from 3 to 25 μm. To be launched on a Delta II rocket into a 500 km Sun-synchronous orbit in June 2009, WISE will, during its 7 months of operations, acquire about 50 GBytes of raw science data every day, which will be down-linked via the TDRSS relay satellite system and processed into an astronomical catalogue and image atlas. The WISE mission operations system is being implemented in collaboration between UCLA, JPL and IPAC (Caltech). In this paper we describe the challenges of managing a high data rate, cryogenic, low-Earth-orbit mission: maintaining safe on-orbit operations, recovering quickly from anomalies (mandated by the desire to provide complete sky coverage in a limited lifetime), and producing and disseminating high-quality science products, all within the constraints imposed by funding profiles for small space missions.

  7. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows us to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs and also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

  8. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
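
    A minimal PyTorch sketch of the kind of convolutional mapping described above: a small stack of pair-wise subtraction images in, a single perfusion image out, trained against the averaged reference. The layer sizes, learning rate, and mean-squared-error objective are assumptions for illustration and do not reproduce the study's actual architecture or training protocol.

      import torch
      import torch.nn as nn

      class SubtractionToPerfusionCNN(nn.Module):
          """Toy CNN mapping a stack of pair-wise subtraction images to one perfusion map."""
          def __init__(self, n_subtractions=2):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(n_subtractions, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, kernel_size=3, padding=1),
              )

          def forward(self, x):
              return self.net(x)

      # Hypothetical training step: inputs are (batch, n_subtractions, H, W) subtraction
      # images; targets stand in for averages of six or seven subtractions, as in the study.
      model = SubtractionToPerfusionCNN(n_subtractions=2)
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      x = torch.randn(4, 2, 64, 64)        # placeholder data, not real ASL images
      target = torch.randn(4, 1, 64, 64)
      loss = loss_fn(model(x), target)
      loss.backward()
      optimizer.step()
      print(f"toy training loss: {loss.item():.4f}")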

  9. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  10. Integrated, Step-Wise, Mass-Isotopomeric Flux Analysis of the TCA Cycle.

    PubMed

    Alves, Tiago C; Pongratz, Rebecca L; Zhao, Xiaojian; Yarborough, Orlando; Sereda, Sam; Shirihai, Orian; Cline, Gary W; Mason, Graeme; Kibbey, Richard G

    2015-11-03

    Mass isotopomer multi-ordinate spectral analysis (MIMOSA) is a step-wise flux analysis platform to measure discrete glycolytic and mitochondrial metabolic rates. Importantly, direct citrate synthesis rates were obtained by deconvolving the mass spectra generated from [U-(13)C6]-D-glucose labeling for position-specific enrichments of mitochondrial acetyl-CoA, oxaloacetate, and citrate. Comprehensive steady-state and dynamic analyses of key metabolic rates (pyruvate dehydrogenase, β-oxidation, pyruvate carboxylase, isocitrate dehydrogenase, and PEP/pyruvate cycling) were calculated from the position-specific transfer of (13)C from sequential precursors to their products. Important limitations of previous techniques were identified. In INS-1 cells, citrate synthase rates correlated with both insulin secretion and oxygen consumption. Pyruvate carboxylase rates were substantially lower than previously reported but showed the highest fold change in response to glucose stimulation. In conclusion, MIMOSA measures key metabolic rates from the precursor/product position-specific transfer of (13)C-label between metabolites and has broad applicability to any glucose-oxidizing cell. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. The WIRED Survey. 2; Infrared Excesses in the SDSS DR7 White Dwarf Catalog

    NASA Technical Reports Server (NTRS)

    Debes, John H.; Hoard, D. W.; Wachter, Stefanie; Leisawitz, David T.; Cohen, Martin

    2011-01-01

    With the launch of the Wide-field Infrared Survey Explorer (WISE), a new era of detecting planetary debris and brown dwarfs (BDs) around white dwarfs (WDs) has begun with the WISE InfraRed Excesses around Degenerates (WIRED) Survey. The WIRED Survey is sensitive to substellar objects and dusty debris around WDs out to distances exceeding 100 pc, well beyond the completeness level of local WDs. In this paper, we present a cross-correlation of the preliminary Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) WD catalog between the WISE, Two-Micron All Sky Survey (2MASS), UKIRT Infrared Deep Sky Survey (UKIDSS), and SDSS DR7 photometric catalogs. From ~18,000 input targets, there are WISE detections comprising 344 "naked" WDs (detection of the WD photosphere only), 1020 candidate WD+M dwarf binaries, 42 candidate WD+BD systems, 52 candidate WD+dust disk systems, and 69 targets with indeterminate infrared excess. We classified all of the detected targets through spectral energy distribution model fitting of the merged optical, near-IR, and WISE photometry. Some of these detections could be the result of contaminating sources within the large (approx. 6") WISE point-spread function; we make a preliminary estimate of the rates of contamination for our WD+BD and WD+disk candidates and provide notes for each target of interest. Each candidate presented here should be confirmed with higher angular resolution infrared imaging or infrared spectroscopy. We also present an overview of the observational characteristics of the detected WDs in the WISE photometric bands, including the relative frequencies of candidate WD+M, WD+BD, and WD+disk systems.

  12. Step-wise loss of antidepressant effectiveness with repeated antidepressant trials in bipolar II depression.

    PubMed

    Amsterdam, Jay D; Lorenzo-Luaces, Lorenzo; DeRubeis, Robert J

    2016-11-01

    This study examined the relationship between the number of prior antidepressant treatment trials and a step-wise increase in pharmacodynamic tolerance (or progressive loss of effectiveness) in subjects with bipolar II depression. Subjects ≥18 years old with bipolar II depression (n=129) were randomized to double-blind venlafaxine or lithium carbonate monotherapy for 12 weeks. Responders (n=59) received continuation monotherapy for six additional months. After controlling for baseline covariates of prior medications, there was a 25% reduction in the likelihood of response to treatment with each increase in the number of prior antidepressant trials (odds ratio [OR]=0.75, unstandardized coefficient [B]=-0.29, standard error [SE]=0.12; χ2=5.70, P<.02), as well as a 32% reduction in the likelihood of remission with each prior antidepressant trial (OR=0.68, B=-0.39, SE=0.13; χ2=9.71, P=.002). This step-wise increase in pharmacodynamic tolerance occurred in both treatment conditions. Prior selective serotonin reuptake inhibitor (SSRI) therapy was specifically associated with a step-wise increase in tolerance, whereas other prior antidepressants or mood stabilizers were not associated with pharmacodynamic tolerance. Neither the number of prior antidepressants nor the number of prior SSRIs or mood stabilizers was associated with an increase in relapse during continuation therapy. The odds of responding or remitting during venlafaxine or lithium monotherapy were reduced by 25% and 32%, respectively, with each increase in the number of prior antidepressant treatment trials. There was no relationship between prior antidepressant exposure and depressive relapse during continuation therapy of bipolar II disorder. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
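
    The odds ratios quoted above are simply the exponentiated logistic-regression coefficients; the snippet below verifies that arithmetic and adds an approximate Wald-type 95% confidence interval from the reported standard errors (the interval construction is an illustration, not something reported in the abstract).

      import math

      # Response model: B = -0.29, SE = 0.12  ->  OR = exp(B) ~ 0.75
      # Remission model: B = -0.39, SE = 0.13 ->  OR = exp(B) ~ 0.68
      for label, B, SE in [("response", -0.29, 0.12), ("remission", -0.39, 0.13)]:
          OR = math.exp(B)
          lo, hi = math.exp(B - 1.96 * SE), math.exp(B + 1.96 * SE)  # Wald-type 95% CI (assumed)
          print(f"{label}: OR = {OR:.2f} per prior antidepressant trial "
                f"(approx. 95% CI {lo:.2f}-{hi:.2f})")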

  13. The calibration of the WISE W1 and W2 Tully-Fisher relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neill, J. D.; Seibert, Mark; Scowcroft, Victoria

    2014-09-10

    In order to explore local large-scale structures and velocity fields, accurate galaxy distance measures are needed. We now extend the well-tested recipe for calibrating the correlation between galaxy rotation rates and luminosities—capable of providing such distance measures—to the all-sky, space-based imaging data from the Wide-field Infrared Survey Explorer (WISE) W1 (3.4 μm) and W2 (4.6 μm) filters. We find a correlation of line width to absolute magnitude (known as the Tully-Fisher relation, TFR) of M_W1^{b,i,k,a} = -20.35 - 9.56(log W_mx^i - 2.5) (0.54 mag rms) and M_W2^{b,i,k,a} = -19.76 - 9.74(log W_mx^i - 2.5) (0.56 mag rms) from 310 galaxies in 13 clusters. We update the I-band TFR using a sample 9% larger than in Tully and Courtois. We derive M_I^{b,i,k} = -21.34 - 8.95(log W_mx^i - 2.5) (0.46 mag rms). The WISE TFRs show evidence of curvature. Quadratic fits give M_W1^{b,i,k,a} = -20.48 - 8.36(log W_mx^i - 2.5) + 3.60(log W_mx^i - 2.5)^2 (0.52 mag rms) and M_W2^{b,i,k,a} = -19.91 - 8.40(log W_mx^i - 2.5) + 4.32(log W_mx^i - 2.5)^2 (0.55 mag rms). We apply an I-band minus WISE color correction to lower the scatter and derive M_{C_W1} = -20.22 - 9.12(log W_mx^i - 2.5) and M_{C_W2} = -19.63 - 9.11(log W_mx^i - 2.5) (both 0.46 mag rms). Using our three independent TFRs (W1 curved, W2 curved, and I band), we calibrate the UNION2 Type Ia supernova sample distance scale and derive H_0 = 74.4 ± 1.4 (stat) ± 2.4 (sys) km s^-1 Mpc^-1 with 4% total error.
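
    A small sketch applying the linear W1 relation quoted above to turn a corrected line width and apparent magnitude into a distance via the distance modulus; the example input values are invented, and the corrections implied by the b,i,k,a superscripts are assumed to have been applied already.

      import math

      # Linear WISE W1 Tully-Fisher relation from the abstract:
      #   M_W1 = -20.35 - 9.56 * (log10(W_mx^i) - 2.5)   (0.54 mag rms)
      def absolute_mag_w1(logW):
          return -20.35 - 9.56 * (logW - 2.5)

      def luminosity_distance_mpc(apparent_mag, absolute_mag):
          mu = apparent_mag - absolute_mag            # distance modulus
          return 10 ** ((mu + 5.0) / 5.0) / 1.0e6     # pc -> Mpc

      logW = 2.6      # hypothetical corrected line width, log10(km/s)
      m_w1 = 12.0     # hypothetical corrected apparent W1 magnitude
      M = absolute_mag_w1(logW)
      print(f"M_W1 = {M:.2f} mag, distance ~ {luminosity_distance_mpc(m_w1, M):.1f} Mpc")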

  14. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and uses the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  15. A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)

    DTIC Science & Technology

    2013-01-22

    However, updating u^{k+1} via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration which...may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU-time consumed. The efficiency of component-wise Gauss-Seidel ...Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p
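
    The snippet above refers to a component-wise (Gauss-Seidel style) update inside an l1 solver; the sketch below shows the generic idea for a lasso-type problem, min 0.5*||Ax - b||^2 + lam*||x||_1, sweeping one coordinate at a time with a soft-threshold. It is a textbook illustration under that assumed objective, not the Algorithm 1 referenced in the report.

      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def lasso_coordinate_descent(A, b, lam, n_iter=200):
          """Component-wise (Gauss-Seidel style) updates for 0.5*||Ax-b||^2 + lam*||x||_1."""
          n = A.shape[1]
          x = np.zeros(n)
          col_sq = (A ** 2).sum(axis=0)
          for _ in range(n_iter):
              for j in range(n):
                  # Partial residual excluding coordinate j, using the freshest values
                  # of all other coordinates (Gauss-Seidel rather than Jacobi).
                  r_j = b - A @ x + A[:, j] * x[j]
                  rho = A[:, j] @ r_j
                  x[j] = soft_threshold(rho, lam) / col_sq[j]
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 20))
      x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
      b = A @ x_true + 0.01 * rng.standard_normal(50)
      print(np.round(lasso_coordinate_descent(A, b, lam=0.5), 2))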

  16. Referral Coordination in the Next TRICARE Contract Environment: A Case Study Applying Failure Mode Effects Analysis

    DTIC Science & Technology

    2004-06-13

    antiquity. Plutarch is credited for saying in Morals--Against Colotes the Epicurean, "For to err in opinion, though it be not the part of wise men, it is at...least human" (Plutarch, AD 110). Of the 5 definitions for error given in Merriam-Webster's Collegiate Dictionary, the third one listed "an act that...Identifying and managing inappropriate hospital utilization: A policy synthesis. Health Services Research, 22(5), 710-57. Plutarch. (AD 110). Worldofquotes

  17. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed

    Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A

    2011-07-01

    There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.
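
    For reference on the frequentist side of the comparison, the O'Brien-Fleming-type error-spending function used with the Lan-DeMets approach has a simple closed form; the sketch below evaluates the cumulative Type I error "spent" at a few information fractions, with equally spaced looks and an overall alpha of 0.05 chosen arbitrarily. The Bayesian stopping rules and the bootstrap procedure of the study are not reproduced here.

      from scipy.stats import norm

      def obf_spending(t, alpha=0.05):
          """Lan-DeMets error-spending function approximating O'Brien-Fleming boundaries."""
          z = norm.ppf(1.0 - alpha / 2.0)
          return 2.0 * (1.0 - norm.cdf(z / t ** 0.5))

      # Cumulative Type I error "spent" at equally spaced interim looks (hypothetical design).
      for t in (0.25, 0.5, 0.75, 1.0):
          print(f"information fraction {t:.2f}: cumulative alpha spent = {obf_spending(t):.4f}")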

  18. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed Central

    Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.

    2014-01-01

    Background There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792

  19. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis.

    PubMed

    Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C

    2014-12-01

    Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week for two 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait-scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to the parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum point on the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, with partial least squares discriminant analysis achieving performance similar to the reference method. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
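
    A minimal sketch of the modeling step described above, using scikit-learn's PLSRegression as a PLS-DA by regressing a 0/1 lameness label and thresholding the continuous prediction; the feature matrix, number of components, and threshold are placeholders rather than the farm's AMS variables or the study's tuning.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 80))   # placeholder for 80 week-summarized AMS variables
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(200) > 0).astype(float)  # 1 = lame (synthetic)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      pls = PLSRegression(n_components=5).fit(X_tr, y_tr)   # PLS-DA: regress the class label
      score = pls.predict(X_te).ravel()                      # continuous lameness score
      pred = (score > 0.5).astype(float)                     # assumed decision threshold

      sens = np.mean(pred[y_te == 1] == 1)
      spec = np.mean(pred[y_te == 0] == 0)
      print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")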

  20. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    PubMed

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.

  1. Application of MUSLE for the prediction of phosphorus losses.

    PubMed

    Noor, Hamze; Mirnia, Seyed Khalagh; Fazli, Somaye; Raisi, Mohamad Bagher; Vafakhah, Mahdi

    2010-01-01

    Soil erosion in forestlands affects not only land productivity but also the water body downstream. The Universal Soil Loss Equation (USLE) has been applied broadly for the prediction of soil loss from upland fields. However, there are few reports concerning the prediction of nutrient (P) losses based on the USLE and its versions. The present study was conducted to evaluate the applicability of the deterministic Modified Universal Soil Loss Equation (MUSLE) model to the estimation of phosphorus losses in the Kojor forest watershed, northern Iran. The model was tested and calibrated using accurate, continuous P loss data collected during seven storm events in 2008. Results of the original model simulations for storm-wise P loss did not match the observed data, while the revised version of the model reproduced the observed values well. The results of the study confirmed the efficient application of the revised MUSLE for estimating storm-wise P losses in the study area, with a level of agreement beyond 93% and an acceptable estimation error of some 35%.

  2. VizieR Online Data Catalog: New IR photometric study of Ap and Am stars (Chen+, 2017)

    NASA Astrophysics Data System (ADS)

    Chen, P. S.; Liu, J. Y.; Shan, H. G.

    2018-05-01

    In the General Catalog of Ap and Am stars (Renson & Manfroid 2009, Cat. III/260) 8265 stars are included in which, as Renson & Manfroid (2009, Cat. III/260) described, only 426 stars are of the "well known confirmed sample". We take these 426 stars as our working sample. The cross-identifications of 2MASS/WISE counterparts for all Ap, Am, and HgMn stars listed in this paper are made from Cutri et al. (2012, Cat. II/311) by using the radius of 2 arcsec. All 426 Ap, Am, and HgMn stars have 2MASS and/or WISE counterparts, which are listed in Table 3. The cross-identifications of IRAS counterparts are made according to the positional error ellipse of the source, because it has a 95% confidence level (IRAS Explanatory Supplement, Beichman et al. 1988, Cat. II/274). Finally, 202 stars are found to have the IRAS counterparts from IRAS PSC/FSC, which is listed in Table 4. (5 data files).
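
    A sketch of the 2-arcsecond positional cross-match described above, using astropy's catalog-matching utilities; the coordinates below are invented stand-ins, whereas the actual work matched against the published 2MASS/WISE catalogs.

      import astropy.units as u
      from astropy.coordinates import SkyCoord

      # Toy source lists standing in for the Ap/Am star positions and a 2MASS/WISE catalog.
      stars = SkyCoord(ra=[10.001, 45.300, 220.750] * u.deg,
                       dec=[-5.000, 12.340, 33.210] * u.deg)
      catalog = SkyCoord(ra=[10.0013, 45.3002, 100.000] * u.deg,
                         dec=[-5.0002, 12.3401, 20.000] * u.deg)

      idx, d2d, _ = stars.match_to_catalog_sky(catalog)   # nearest catalog neighbour for each star
      matched = d2d < 2 * u.arcsec                        # 2-arcsec matching radius, as in the text
      for i, (j, sep, ok) in enumerate(zip(idx, d2d.arcsec, matched)):
          print(f"star {i}: nearest catalog source {j}, sep = {sep:.2f} arcsec, matched = {ok}")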

  3. Non-Parametric Collision Probability for Low-Velocity Encounters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2007-01-01

    An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
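
    A minimal Monte Carlo sketch of the kind of trial discussed above: sample relative miss positions in the encounter plane from an assumed covariance and count the fraction falling inside the combined hard-body radius. The mean miss, covariance, radius, and sample count are placeholders, and the process-noise modeling and non-parametric estimate advocated in the paper are not included.

      import numpy as np

      rng = np.random.default_rng(42)

      # Placeholder encounter geometry: mean miss distance and 2x2 position covariance
      # in the encounter plane (km), plus a combined hard-body radius (km).
      mean_miss = np.array([0.3, 0.1])
      cov = np.array([[0.04, 0.01],
                      [0.01, 0.09]])
      hard_body_radius = 0.02   # 20 m

      n_trials = 1_000_000
      samples = rng.multivariate_normal(mean_miss, cov, size=n_trials)
      hits = np.linalg.norm(samples, axis=1) < hard_body_radius
      p_collision = hits.mean()
      stderr = np.sqrt(p_collision * (1 - p_collision) / n_trials)  # binomial Monte Carlo error
      print(f"P(collision) ~ {p_collision:.2e} +/- {stderr:.1e}")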

  4. A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions.

    PubMed

    Bottomley, Steven; Denny, Paul

    2011-01-01

    A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs written by their peers. The technology used to support this activity was PeerWise--a freely available, innovative web-based system that supports students in the creation of an annotated question repository. In this case study, we describe students' contributions to, and perceptions of, the PeerWise system for a cohort of 107 second-year biomedical science students from three degree streams studying a core biochemistry subject. Our study suggests that the students are eager participants and produce a large repository of relevant, good quality MCQs. In addition, they rate the PeerWise system highly and use higher order thinking skills while taking an active role in their learning. We also discuss potential issues and future work using PeerWise for biomedical students. Copyright © 2011 Wiley Periodicals, Inc.

  5. MRI-based intelligence quotient (IQ) estimation with sparse learning.

    PubMed

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets acquired at different sites with different scanning parameters and protocols, which introduces large variability across the data. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which, to our knowledge, is rarely done in the field.

  6. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of doing peripheral, digital subtraction angiography (DSA) with a single contrast injection using a moving gantry system. Given repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) both give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piece-wise, 8 parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
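
    The "parabolic prediction of the new minimum" mentioned above is standard successive parabolic interpolation: given three (shift, similarity) samples, the vertex of the fitted parabola predicts where the minimum lies. The sample values below are invented.

      def parabolic_vertex(x1, f1, x2, f2, x3, f3):
          """Abscissa of the vertex of the parabola through three (x, f) samples."""
          num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
          den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
          return x2 - 0.5 * num / den

      # Toy similarity-measure samples at three pixel shifts (hypothetical values);
      # the fitted parabola places the minimum near x ~ 0.11.
      print(parabolic_vertex(-1.0, 5.2, 0.0, 4.1, 1.0, 4.8))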

  7. Performance metrics for the assessment of satellite data products: an ocean color case study

    PubMed Central

    Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy

    2018-01-01

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
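
    A sketch of the contrast drawn above, computing both the squared-error statistics and the simpler deviation-based ones for the same paired data; whether to work in log10 space (common for chlorophyll-a) is left as a flag, and the arrays are placeholders rather than the SeaWiFS matchups.

      import numpy as np

      def assessment_metrics(observed, modeled, log_space=True):
          """Bias, MAE (deviation-based) and RMSE, r^2 (squared-error-based) for paired data."""
          o, m = np.asarray(observed, float), np.asarray(modeled, float)
          if log_space:                  # common for chlorophyll-a, which is log-distributed
              o, m = np.log10(o), np.log10(m)
          resid = m - o
          return dict(bias=resid.mean(),
                      mae=np.abs(resid).mean(),
                      rmse=np.sqrt((resid ** 2).mean()),
                      r2=np.corrcoef(o, m)[0, 1] ** 2)

      # Placeholder chlorophyll-a matchups (mg m^-3), with one outlier to show the contrast.
      obs = [0.1, 0.3, 1.0, 2.5, 5.0, 0.4]
      mod = [0.12, 0.28, 1.3, 2.0, 4.2, 4.0]
      print(assessment_metrics(obs, mod))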

  8. THE NEOWISE-DISCOVERED COMET POPULATION AND THE CO + CO{sub 2} PRODUCTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, James M.; Stevenson, Rachel; Kramer, Emily

    2015-12-01

    The 163 comets observed during the WISE/NEOWISE prime mission represent the largest infrared survey to date of comets, providing constraints on dust, nucleus size, and CO + CO{sub 2} production. We present detailed analyses of the WISE/NEOWISE comet discoveries, and discuss observations of the active comets showing 4.6 μm band excess. We find a possible relation between dust and CO + CO{sub 2} production, as well as possible differences in the sizes of long and short period comet nuclei.

  9. Prevalence of ketosis in dairy cows in milk shed areas of Odisha state, India.

    PubMed

    Biswal, Sangram; Nayak, Dhruba Charan; Sardar, Kautuk Kumar

    2016-11-01

    The present study was conducted to ascertain the prevalence of ketosis in dairy cows in dairy herds, milksheds, and mixed populations of milk cows selected randomly in milkshed areas of Odisha state, India. The investigation was conducted in 280 private dairy herds with a variable herd size of 10-15 cows comprising crossbred Jersey (CBJ) cows, crossbred Holstein Friesian (CHF) cows, and indigenous local breeds. Urine (Rothera's test), milk (Ross test), and blood samples from 2760 test cows were analyzed through qualitative assessment for the presence of ketone bodies to screen for ketotic animals. Cut-points were decided based on the β-hydroxybutyric acid level (≥1.2-1.4 mmol/L) in milk. We noted positive cases of ketosis with a prevalence rate of 36.7% (1014/2760), comprising 27.2% clinical ketosis and 9.6% subclinical ketosis. The breed-wise incidence rate was highest (38.0%) in CBJ cows. The age-wise prevalence rate was highest (40.8%) in the age group of 5.5-6.5 years. The season-wise prevalence rate in 5th calvers was highest (38.6%) in the summer season compared with other seasons. The prevalence of ketosis was highest, at 56.7%, in the first stage of lactation, in the 1st month after 2 weeks. The incidence rates for clinical and subclinical ketosis were 25.2% and 12.2%, 26.6% and 11.2%, and 30.3% and 2.9% in CBJ, CHF, and indigenous cows, respectively. The breed-wise overall prevalence rate was 38.0% in CBJ, 37.8% in CHF, and 33.2% in indigenous cows. Ketosis and subclinical ketosis are highly prevalent metabolic disorders that severely affect the production status of affected animals and need to be prevented, rather than treated, by maintaining cows in good and healthy condition. We have attempted to give careful attention to the diagnosis, management, and control of this disease during the risk stage to prevent the economic loss sustained by the dairy farmers of Eastern India.

  10. Prevalence of ketosis in dairy cows in milk shed areas of Odisha state, India

    PubMed Central

    Biswal, Sangram; Nayak, Dhruba Charan; Sardar, Kautuk Kumar

    2016-01-01

    Aim: The present study was conducted to ascertain the prevalence of ketosis in dairy cows in dairy herds, milksheds, and mixed populations of milk cows selected randomly in milkshed areas of Odisha state, India. Materials and Methods: The investigation was conducted in 280 private dairy herds with a variable herd size of 10-15 cows comprising crossbred Jersey (CBJ) cows, crossbred Holstein Friesian (CHF) cows, and indigenous local breeds. Urine (Rothera's test), milk (Ross test), and blood samples from 2760 test cows were analyzed through qualitative assessment for the presence of ketone bodies to screen for ketotic animals. Cut-points were decided based on the β-hydroxybutyric acid level (≥1.2-1.4 mmol/L) in milk. Results: We noted positive cases of ketosis with a prevalence rate of 36.7% (1014/2760), comprising 27.2% clinical ketosis and 9.6% subclinical ketosis. The breed-wise incidence rate was highest (38.0%) in CBJ cows. The age-wise prevalence rate was highest (40.8%) in the age group of 5.5-6.5 years. The season-wise prevalence rate in 5th calvers was highest (38.6%) in the summer season compared with other seasons. The prevalence of ketosis was highest, at 56.7%, in the first stage of lactation, in the 1st month after 2 weeks. The incidence rates for clinical and subclinical ketosis were 25.2% and 12.2%, 26.6% and 11.2%, and 30.3% and 2.9% in CBJ, CHF, and indigenous cows, respectively. The breed-wise overall prevalence rate was 38.0% in CBJ, 37.8% in CHF, and 33.2% in indigenous cows. Conclusion: Ketosis and subclinical ketosis are highly prevalent metabolic disorders that severely affect the production status of affected animals and need to be prevented, rather than treated, by maintaining cows in good and healthy condition. We have attempted to give careful attention to the diagnosis, management, and control of this disease during the risk stage to prevent the economic loss sustained by the dairy farmers of Eastern India. PMID:27956776

  11. Simultaneous optimization method for absorption spectroscopy postprocessing.

    PubMed

    Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T

    2015-05-10

    A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.

  12. The First Hyper-Luminous Infrared Galaxy Discovered by WISE

    NASA Technical Reports Server (NTRS)

    Eisenhardt, Peter R.; Wu, Jingwen; Tsai, Chao-Wei; Assef, Roberto; Benford, Dominic; Blain, Andrew; Bridge, Carrie; Condon, J. J.; Cushing, Michael C.; Cutri, Roc

    2012-01-01

    We report the discovery by the Wide-field Infrared Survey Explorer of the z = 2.452 source WISE J181417.29+341224.9, the first hyperluminous source found in the WISE survey. WISE 1814+3412 is also the prototype for an all-sky sample of approximately 1000 extremely luminous "W1W2-dropouts" (sources faint or undetected by WISE at 3.4 and 4.6 micrometers and well detected at 12 or 22 micrometers). The WISE data and a 350 micrometer detection give a minimum bolometric luminosity of 3.7 × 10^13 solar luminosities, with approximately 10^14 solar luminosities plausible. Follow-up images reveal four nearby sources: a QSO and two Lyman Break Galaxies (LBGs) at z = 2.45, and an M dwarf star. The brighter LBG dominates the bolometric emission. Gravitational lensing is unlikely given the source locations and their different spectra and colors. The dominant LBG spectrum indicates a star formation rate of approximately 300 solar masses per year, accounting for 10 percent or less of the bolometric luminosity. Strong 22 micrometer emission relative to 350 micrometers implies that warm dust contributes significantly to the luminosity, while cooler dust normally associated with starbursts is constrained by an upper limit at 1.1 mm. Radio emission is approximately 10 times above the far-infrared/radio correlation, indicating an active galactic nucleus is present. An obscured AGN combined with starburst and evolved stellar components can account for the observations. If the black hole mass follows the local M_BH-bulge mass relation, the implied Eddington ratio is approximately 4 or greater. WISE 1814+3412 may be a heavily obscured object where the peak AGN activity occurred prior to the peak era of star formation.

  13. Main Belt Asteroids with WISE/NEOWISE. I. Preliminary Albedos and Diameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masiero, Joseph R.; Mainzer, A. K.; Bauer, J. M.

    We present initial results from the Wide-field Infrared Survey Explorer (WISE), a four-band all-sky thermal infrared survey that produces data well suited for measuring the physical properties of asteroids, and the NEOWISE enhancement to the WISE mission allowing for detailed study of solar system objects. Using a NEATM thermal model fitting routine, we compute diameters for over 100,000 Main Belt asteroids from their IR thermal flux, with errors better than 10%. We then incorporate literature values of visible measurements (in the form of the H absolute magnitude) to determine albedos. Using these data we investigate the albedo and diameter distributions of the Main Belt. As observed previously, we find a change in the average albedo when comparing the inner, middle, and outer portions of the Main Belt. We also confirm that the albedo distribution of each region is strongly bimodal. We observe groupings of objects with similar albedos in regions of the Main Belt associated with dynamical breakup families. Asteroid families typically show a characteristic albedo for all members, but there are notable exceptions to this. This paper is the first look at the Main Belt asteroids in the WISE data, and only represents the preliminary, observed raw size and albedo distributions for the populations considered. These distributions are subject to survey biases inherent to the NEOWISE data set and cannot yet be interpreted as describing the true populations; the debiased size and albedo distributions will be the subject of the next paper in this series.

  14. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used extensively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification pipeline. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in a total error of 5.22%, a type I error of 4.10%, and a type II error of 15.07%. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on very high point density LIDAR point clouds, resulting in a total error of 4.02%, a type I error of 2.15%, and a type II error of 6.14%.
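
    A minimal NumPy sketch of the "whole point cloud into a single image" step described above: bin the points on a horizontal grid and keep one height statistic per cell. The cell size and the lowest-return statistic are assumptions for illustration; the feature images actually used in the paper are not reproduced here.

      import numpy as np

      def rasterize_min_z(points, cell_size=1.0):
          """Convert an (N, 3) LIDAR point cloud into a single min-height image."""
          xy = points[:, :2]
          origin = xy.min(axis=0)
          cols, rows = np.floor((xy - origin) / cell_size).astype(int).T
          image = np.full((rows.max() + 1, cols.max() + 1), np.nan)
          for r, c, z in zip(rows, cols, points[:, 2]):
              if np.isnan(image[r, c]) or z < image[r, c]:
                  image[r, c] = z      # keep the lowest return per cell (assumed statistic)
          return image

      rng = np.random.default_rng(0)
      pts = np.column_stack([rng.uniform(0, 10, 1000),   # x (m), placeholder points
                             rng.uniform(0, 10, 1000),   # y (m)
                             rng.uniform(0, 3, 1000)])   # z (m)
      print(rasterize_min_z(pts, cell_size=1.0).shape)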

  15. If You Don’t Find It Often, You Often Don’t Find It: Why Some Cancers Are Missed in Breast Cancer Screening

    PubMed Central

    Evans, Karla K.; Birdwell, Robyn L.; Wolfe, Jeremy M.

    2013-01-01

    Mammography is an important tool in the early detection of breast cancer. However, the perceptual task is difficult and a significant proportion of cancers are missed. Visual search experiments show that miss (false negative) errors are elevated when targets are rare (low prevalence) but it is unknown if low prevalence is a significant factor under real world, clinical conditions. Here we show that expert mammographers in a real, low-prevalence, clinical setting, miss a much higher percentage of cancers than are missed when the mammographers search for the same cancers under high prevalence conditions. We inserted 50 positive and 50 negative cases into the normal workflow of the breast cancer screening service of an urban hospital over the course of nine months. This rate was slow enough not to markedly raise disease prevalence in the radiologists’ daily practice. Six radiologists subsequently reviewed all 100 cases in a session where the prevalence of disease was 50%. In the clinical setting, participants missed 30% of the cancers. In the high prevalence setting, participants missed just 12% of the same cancers. Under most circumstances, this low prevalence effect is probably adaptive. It is usually wise to be conservative about reporting events with very low base rates (Was that a flying saucer? Probably not.). However, while this response to low prevalence appears to be strongly engrained in human visual search mechanisms, it may not be as adaptive in socially important, low prevalence tasks like medical screening. While the results of any one study must be interpreted cautiously, these data are consistent with the conclusion that this behavioral response to low prevalence could be a substantial contributor to miss errors in breast cancer screening. PMID:23737980

  16. Exploring effective multiplicity in multichannel functional near-infrared spectroscopy using eigenvalues of correlation matrices

    PubMed Central

    Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju

    2015-01-01

    Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
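
    A sketch of an eigenvalue-based effective-multiplicity estimate of the kind described above, using two common estimators from the genetics literature (Nyholt-style and Li & Ji-style) on the channel-wise correlation matrix; the simulated 44-channel signals are placeholders, and the exact estimator and resampling scheme used in the study may differ.

      import numpy as np

      def meff_nyholt(corr):
          """Nyholt (2004)-style effective number of tests from correlation-matrix eigenvalues."""
          lam = np.linalg.eigvalsh(corr)
          M = corr.shape[0]
          return 1 + (M - 1) * (1 - np.var(lam, ddof=1) / M)

      def meff_li_ji(corr):
          """Li & Ji (2005)-style count: integer part where lambda >= 1 plus fractional remainder."""
          lam = np.abs(np.linalg.eigvalsh(corr))
          return float(np.sum((lam >= 1).astype(float) + (lam - np.floor(lam))))

      # Placeholder "44-channel" data with a shared component, standing in for fNIRS signals.
      rng = np.random.default_rng(0)
      shared = rng.standard_normal((500, 1))
      data = 0.6 * shared + 0.8 * rng.standard_normal((500, 44))
      corr = np.corrcoef(data, rowvar=False)
      print(f"Nyholt Meff ~ {meff_nyholt(corr):.1f}, Li & Ji Meff ~ {meff_li_ji(corr):.1f} "
            f"(44 channels; Bonferroni would correct for all 44)")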

  17. Adaptive evolution of Escherichia coli to Ciprofloxacin in controlled stress environments: emergence of resistance in continuous and step-wise gradients

    NASA Astrophysics Data System (ADS)

    Deng, J.; Zhou, L.; Dong, Y.; Sanford, R. A.; Shechtman, L. A.; Alcalde, R.; Werth, C. J.; Fouke, B. W.

    2017-12-01

    Microorganisms in nature have evolved in response to a variety of environmental stresses, including gradients in pH, flow, and chemistry. While environmental stresses are generally considered to be the driving force of adaptive evolution, the impact and extent of any specific stress needed to drive such changes has not been well characterized. In this study, a microfluidic diffusion chamber (MDC) and a batch culturing system were used to systematically study the effects of continuous versus step-wise stress increments on the adaptation of E. coli to the antibiotic ciprofloxacin. In the MDC, a diffusion gradient of ciprofloxacin was established across a microfluidic well array to microscopically observe changes in Escherichia coli strain 307 replication and migration patterns that would indicate the emergence of resistance due to genetic mutations. Cells recovered from the MDC had resistance of only 50 times the original minimum inhibitory concentration (MICoriginal) of ciprofloxacin, although minimum exposure concentrations were over 80 × MICoriginal by the end of the experiment. In complementary batch experiments, E. coli 307 were exposed to step-wise daily increases of ciprofloxacin at rates equivalent to 0.1×, 0.2×, 0.4× or 0.8× MICoriginal/day. Over a period of 18 days, E. coli cells were able to acquire resistance of up to 225 × MICoriginal, with exposure to ciprofloxacin concentrations of only up to 14.9 × MICoriginal. The different levels of acquired resistance in the continuous MDC versus step-wise batch increment experiments suggest that the intrinsic rate of E. coli adaptation was exceeded in the MDC, while the step-wise experiments favor adaptation up to the highest ciprofloxacin concentrations. Genomic analyses of E. coli DNA extracted from the microfluidic cell and batch cultures indicated four single nucleotide polymorphism (SNP) mutations at amino acids 82, 83, and 87 in the gyrA gene. The progression of adaptation under the step-wise increments of ciprofloxacin indicates that the Ser83-Leu mutation gradually becomes dominant over other gyrA mutations with increased antibiotic resistance. Co-existence of the Ser83-Leu and Asp87-Gly mutations appears to provide the greatest level of resistance (i.e., 85 × to 225 × MICoriginal), and only emerged after the whole community acquired the Ser83-Leu mutation.

  18. Stream-wise distribution of skin-friction drag reduction on a flat plate with bubble injection

    NASA Astrophysics Data System (ADS)

    Qin, Shijie; Chu, Ning; Yao, Yan; Liu, Jingting; Huang, Bin; Wu, Dazhuan

    2017-03-01

    To investigate the stream-wise distribution of skin-friction drag reduction on a flat plate with bubble injection, both experiments and simulations of bubble drag reduction (BDR) have been conducted in this paper. Drag reductions at various flow speeds and air injection rates have been tested in cavitation tunnel experiments, with the bubble flow pattern visualized synchronously. The computational fluid dynamics (CFD) method, in the framework of Eulerian-Eulerian two-fluid modeling, coupled with a population balance model (PBM), is used to simulate the bubbly flow along the flat plate. A wide range of bubble sizes, accounting for bubble breakup and coalescence, is modeled based on experimental bubble distribution images. Drag and lift forces are fully modeled based on applicable closure models. Both the predicted drag reductions and bubble distributions are in reasonable concordance with experimental results. The stream-wise distribution of BDR is revealed from the CFD-PBM numerical results. In particular, four distinct regions with different BDR characteristics are identified and discussed for the first time in this study, and the thresholds between regions are extracted. Fully understanding the stream-wise distribution of BDR is necessary to establish a universal scaling law. Moreover, the mechanism of the stream-wise distribution of BDR is analysed based on near-wall flow parameters: the local drag reduction is a direct result of the near-wall maximum void fraction, and the near-wall velocity gradient modified by the presence of bubbles is considered another important factor for bubble drag reduction.

  19. Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong

    2017-02-01

    In developing treatment of cardiovascular diseases, short axis cine MRI has been used as a standard technique for understanding the global structural and functional characteristics of the heart, e.g. ventricle dimensions, stroke volume and ejection fraction. To conduct an accurate assessment, heart structures need to be segmented from the cine MRI images with high precision, which could be a laborious task when performed manually. Herein a fully automatic framework is proposed for the segmentation of the left ventricle from the slices of short axis cine MRI scans of porcine subjects using a deep learning approach. For training the deep learning models, which generally requires a large set of data, a public database of human cine MRI scans is used. Experiments on the 3150 cine slices of 7 porcine subjects have shown that when comparing the automatic and manual segmentations the mean slice-wise Dice coefficient is about 0.930, the point-to-curve error is 1.07 mm, and the mean slice-wise Hausdorff distance is around 3.70 mm, which demonstrates the accuracy and robustness of the proposed inter-species translational approach.
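    For reference, a slice-wise Dice coefficient of the kind reported above can be computed in a few lines of Python; this is an illustrative helper under assumed array conventions, not code from the paper.

      import numpy as np

      def dice_coefficient(auto_mask, manual_mask):
          """Overlap between automatic and manual binary masks for one short-axis slice."""
          auto_mask = np.asarray(auto_mask, dtype=bool)
          manual_mask = np.asarray(manual_mask, dtype=bool)
          intersection = np.logical_and(auto_mask, manual_mask).sum()
          denom = auto_mask.sum() + manual_mask.sum()
          return 2.0 * intersection / denom if denom > 0 else 1.0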

  20. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
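    The "extended examples" step of such a reduction can be sketched as follows in Python; the exact weighting scheme used by the authors is an assumption here, and the names are illustrative. Each ordinal example (x, y) with K possible ranks becomes K-1 weighted binary questions of the form "is the true rank above k?".

      def extended_examples(x, y, K, cost):
          """x: feature vector; y: ordinal label in {1, ..., K};
          cost[y-1][r-1]: cost of predicting rank r when the true label is y.
          Returns a list of ((features, threshold index), binary label, weight)."""
          examples = []
          for k in range(1, K):                                  # one threshold per adjacent rank pair
              label = 1 if y > k else -1                         # "is the true rank above k?"
              weight = abs(cost[y - 1][k] - cost[y - 1][k - 1])  # cost difference across the threshold
              examples.append(((x, k), label, weight))
          return examples

    A binary classifier trained on these weighted examples can then be turned back into a ranker by counting, for each test point, how many thresholds it predicts as "above".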

  1. A stimulus-dependent spike threshold is an optimal neural coder

    PubMed Central

    Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama

    2015-01-01

    A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
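    A minimal sketch of the spike-generation rule described above, in Python: a leaky "internal decoder" tracks the reconstruction, and a spike is emitted whenever the coding error reaches a threshold (the time constant, threshold, and names are illustrative assumptions, not fitted values from the paper).

      def threshold_coder(signal, dt=1e-3, tau=0.02, theta=0.1):
          """signal: sampled input; tau: decoder (membrane) time constant in seconds;
          theta: spike threshold on the coding error. Returns spike times in seconds."""
          decoded = 0.0
          spike_times = []
          for k, s in enumerate(signal):
              decoded -= (dt / tau) * decoded   # leaky decay of the internal estimate
              error = s - decoded               # internal copy of the coding error
              if error >= theta:
                  decoded += theta              # each spike adds a fixed quantum to the decoder
                  spike_times.append(k * dt)
          return spike_times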

  2. MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning

    PubMed Central

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. As a result, there is large variability across these datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. These results show the effectiveness of using imaging data for IQ prediction, which, to our knowledge, has rarely been done in the field. PMID:25822851
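    The flavor of a "dirty model" penalty that mixes element-wise and group-wise sparsity can be written compactly by splitting the coefficients into two parts, W = S + B. The Python sketch below only states such an objective, with an assumed squared loss and illustrative names; it omits the optimizer and the site-identification step described above.

      import numpy as np

      def dirty_model_objective(X, y, S, B, groups, lam_s, lam_b):
          """X: (n, p) features; y: (n,) targets; S, B: (p,) coefficient parts;
          groups: list of index arrays partitioning the p features."""
          residual = y - X @ (S + B)
          loss = 0.5 * np.sum(residual ** 2)
          elementwise = lam_s * np.sum(np.abs(S))                        # lasso penalty on S
          groupwise = lam_b * sum(np.linalg.norm(B[g]) for g in groups)  # group-lasso penalty on B
          return loss + elementwise + groupwise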

  3. Connectomic disturbances in attention-deficit/hyperactivity disorder: a whole-brain tractography analysis.

    PubMed

    Hong, Soon-Beom; Zalesky, Andrew; Fornito, Alex; Park, Subin; Yang, Young-Hui; Park, Min-Hyeon; Song, In-Chan; Sohn, Chul-Ho; Shin, Min-Sup; Kim, Bung-Nyun; Cho, Soo-Churl; Han, Doug Hyun; Cheong, Jae Hoon; Kim, Jae-Won

    2014-10-15

    Few studies have sought to identify, in a regionally unbiased way, the precise cortical and subcortical regions that are affected by white matter abnormalities in attention-deficit/hyperactivity disorder (ADHD). This study aimed to derive a comprehensive, whole-brain characterization of connectomic disturbances in ADHD. Using diffusion tensor imaging, whole-brain tractography, and an imaging connectomics approach, we characterized altered white matter connectivity in 71 children and adolescents with ADHD compared with 26 healthy control subjects. White matter differences were further delineated between patients with (n = 40) and without (n = 26) the predominantly hyperactive/impulsive subtype of ADHD. A significant network comprising 25 distinct fiber bundles linking 23 different brain regions spanning frontal, striatal, and cerebellar brain regions showed altered white matter structure in ADHD patients (p < .05, family-wise error-corrected). Moreover, fractional anisotropy in some of these fiber bundles correlated with attentional disturbances. Attention-deficit/hyperactivity disorder subtypes were differentiated by a right-lateralized network (p < .05, family-wise error-corrected) predominantly linking frontal, cingulate, and supplementary motor areas. Fractional anisotropy in this network was also correlated with continuous performance test scores. Using an unbiased, whole-brain, data-driven approach, we demonstrated abnormal white matter connectivity in ADHD. The correlations observed with measures of attentional performance underscore the functional importance of these connectomic disturbances for the clinical phenotype of ADHD. A distributed pattern of white matter microstructural integrity separately involving frontal, striatal, and cerebellar brain regions, rather than direct frontostriatal connectivity, appears to be disrupted in children and adolescents with ADHD. Copyright © 2014 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. Does a single session of electroconvulsive therapy alter the neural response to emotional faces in depression? A randomised sham-controlled functional magnetic resonance imaging study.

    PubMed

    Miskowiak, Kamilla W; Kessing, Lars V; Ott, Caroline V; Macoveanu, Julian; Harmer, Catherine J; Jørgensen, Anders; Revsbech, Rasmus; Jensen, Hans M; Paulson, Olaf B; Siebner, Hartwig R; Jørgensen, Martin B

    2017-09-01

    Negative neurocognitive bias is a core feature of major depressive disorder that is reversed by pharmacological and psychological treatments. This double-blind functional magnetic resonance imaging study investigated for the first time whether electroconvulsive therapy modulates negative neurocognitive bias in major depressive disorder. Patients with major depressive disorder were randomised to a single active (n = 15) or sham (n = 12) electroconvulsive therapy session. The following day they underwent whole-brain functional magnetic resonance imaging at 3T while viewing emotional faces and performed facial expression recognition and dot-probe tasks. A single electroconvulsive therapy session had no effect on amygdala response to emotional faces. Whole-brain analysis revealed no effects of electroconvulsive therapy versus sham therapy after family-wise error correction at the cluster level, using a cluster-forming threshold of Z > 3.1 (p < 0.001) to secure a family-wise error rate < 5%. Groups showed no differences in behavioural measures, mood and medication. Exploratory cluster-corrected whole-brain analysis (Z > 2.3; p < 0.01) revealed electroconvulsive therapy-induced changes in parahippocampal and superior frontal responses to fearful versus happy faces, as well as in fear-specific functional connectivity between the amygdala and occipito-temporal regions. Across all patients, greater fear-specific amygdala-occipital coupling correlated with lower fear vigilance. Despite no statistically significant shift in neural response to faces after a single electroconvulsive therapy session, the observed trend changes point to an early shift in emotional processing that may contribute to the antidepressant effects of electroconvulsive therapy.

  5. The Storage Ring Proton EDM Experiment

    NASA Astrophysics Data System (ADS)

    Semertzidis, Yannis; Storage Ring Proton EDM Collaboration

    2014-09-01

    The storage ring pEDM experiment utilizes an all-electric storage ring to store ~10^11 longitudinally polarized protons simultaneously in clock-wise and counter-clock-wise directions for 10^3 seconds. The radial E-field acts on the proton EDM for the duration of the storage time to precess its spin in the vertical plane. The ring lattice is optimized to reduce intra-beam scattering, increase the statistical sensitivity and reduce the systematic errors of the method. The main systematic error is a net radial B-field integrated around the ring causing an EDM-like vertical spin precession. The counter-rotating beams sense this integrated field and are vertically shifted by an amount which depends on the strength of the vertical focusing in the ring, thus creating a radial B-field. Modulating the vertical focusing at 10 kHz makes possible the detection of this radial B-field by a SQUID magnetometer (SQUID-based BPM). For a total number of n SQUID-based BPMs distributed around the ring, the effectiveness of the method is limited to the N = n/2 harmonic of the background radial B-field due to the Nyquist sampling theorem limit. This limitation establishes the requirement to reduce the maximum radial B-field to 0.1-1 nT everywhere around the ring by layers of mu-metal and an aluminum vacuum tube. The method's sensitivity is 10^-29 e·cm, more than three orders of magnitude better than the present neutron EDM experimental limit, making it sensitive to a SUSY-like new physics mass scale up to 300 TeV.

  6. Microsatellite markers associated with resistance to Marek's disease in commercial layer chickens.

    PubMed

    McElroy, J P; Dekkers, J C M; Fulton, J E; O'Sullivan, N P; Soller, M; Lipkin, E; Zhang, W; Koehler, K J; Lamont, S J; Cheng, H H

    2005-11-01

    The objective of the current study was to identify QTL conferring resistance to Marek's disease (MD) in commercial layer chickens. To generate the resource population, 2 partially inbred lines that differed in MD-caused mortality were intermated to produce 5 backcross families. Vaccinated chicks were challenged with very virulent plus (vv+) MD virus strain 648A at 6 d and monitored for MD symptoms. A recent field isolate of the MD virus was used because the lines were resistant to commonly used older laboratory strains. Selective genotyping was employed using 81 microsatellites selected based on prior results with selective DNA pooling. Linear regression and Cox proportional hazard models were used to detect associations between marker genotypes and survival. Significance thresholds were validated by simulation. Seven and 6 markers were significant based on proportion of false positive and false discovery rate thresholds less than 0.2, respectively. Seventeen markers were associated with MD survival considering a comparison-wise error rate of 0.10, which is about twice the number expected by chance, indicating that at least some of the associations represent true effects. Thus, the present study shows that loci affecting MD resistance can be mapped in commercial layer lines. More comprehensive studies are under way to confirm and extend these results.

  7. Weighted mining of massive collections of p-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines, such as computational genomics and observational cosmology, often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
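    Once such weights have been computed, they are typically consumed by a weighted testing rule; the Python sketch below shows a standard weighted Bonferroni step (this is the generic recipe, not the Princessp optimization itself, which solves for the weights by convex programming).

      import numpy as np

      def weighted_bonferroni(pvalues, weights, alpha=0.05):
          """Reject H_i when p_i <= alpha * w_i / m, with weights normalized to average 1."""
          pvalues = np.asarray(pvalues, dtype=float)
          weights = np.asarray(weights, dtype=float)
          m = len(pvalues)
          weights = weights * m / weights.sum()   # enforce a mean weight of 1
          return pvalues <= alpha * weights / m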

  8. Statistical Significance for Hierarchical Clustering

    PubMed Central

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990

  9. Weighted analysis of composite endpoints with simultaneous inference for flexible weight constraints.

    PubMed

    Duc, Anh Nguyen; Wolbers, Marcel

    2017-02-10

    Composite endpoints are widely used as primary endpoints of randomized controlled trials across clinical disciplines. A common critique of the conventional analysis of composite endpoints is that all disease events are weighted equally, whereas their clinical relevance may differ substantially. We address this by introducing a framework for the weighted analysis of composite endpoints and interpretable test statistics, which are applicable to both binary and time-to-event data. To cope with the difficulty of selecting an exact set of weights, we propose a method for constructing simultaneous confidence intervals and tests that asymptotically preserve the family-wise type I error in the strong sense across families of weights satisfying flexible inequality or order constraints, based on the theory of chi-bar-squared distributions. We show that the method achieves the nominal simultaneous coverage rate with substantial efficiency gains over Scheffé's procedure in a simulation study and apply it to trials in cardiovascular disease and enteric fever. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  10. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

    A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of the performances of the systems to show the regions where the bounds are tight (see figure).
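    As a rough illustration of the block-interleaving step described above, the Python sketch below writes symbols into an I-row block and reads them out column-wise (the parameters and the symbol-level granularity are simplifying assumptions, not the configuration of the evaluated systems).

      import numpy as np

      def block_interleave(symbols, depth):
          """symbols: 1-D sequence whose length is a multiple of depth (the number of rows I)."""
          rows = len(symbols) // depth
          return np.asarray(symbols).reshape(depth, rows).T.reshape(-1)

      def block_deinterleave(symbols, depth):
          """Inverse operation; recovers the original order."""
          rows = len(symbols) // depth
          return np.asarray(symbols).reshape(rows, depth).T.reshape(-1)

    Spreading adjacent outer-code symbols apart in this way converts channel error bursts into more nearly isolated errors at the outer decoder, which is why burst error statistics and interleaving depth enter the bounds.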

  11. White matter lesions characterise brain involvement in moderate to severe chronic obstructive pulmonary disease, but cerebral atrophy does not.

    PubMed

    Spilling, Catherine A; Jones, Paul W; Dodd, James W; Barrick, Thomas R

    2017-06-19

    Brain pathology is relatively unexplored in chronic obstructive pulmonary disease (COPD). This study is a comprehensive investigation of grey matter (GM) and white matter (WM) changes and how these relate to disease severity and cognitive function. T1-weighted and fluid-attenuated inversion recovery images were acquired for 31 stable COPD patients (FEV1 52.1% pred., PaO2 10.1 kPa) and 24 age- and gender-matched controls. T1-weighted images were segmented into GM, WM and cerebrospinal fluid (CSF) tissue classes using a semi-automated procedure optimised for use with this cohort. This procedure allows cohort-specific anatomical features to be captured and white matter lesions (WMLs) to be identified, and includes a tissue repair step to correct for misclassification caused by WMLs. Tissue volumes and cortical thickness were calculated from the resulting segmentations. Additionally, a fully-automated pipeline was used to calculate localised cortical surface and gyrification. WM and GM tissue volumes, the tissue volume ratio (an indicator of atrophy), average cortical thickness, and the number, size, and volume of WMLs were analysed across the whole brain and regionally, for each anatomical lobe and the deep GM. The hippocampus was investigated as a region of interest. Localised (voxel-wise and vertex-wise) variations in cortical gyrification, GM density and cortical thickness were also investigated. Statistical models controlling for age and gender were used to test for between-group differences and within-group correlations. Robust statistical approaches ensured the family-wise error rate was controlled in regional and local analyses. There were no significant differences in global, regional, or local measures of GM between patients and controls; however, patients had an increased volume (p = 0.02) and size (p = 0.04) of WMLs. In patients, greater normalised hippocampal volume positively correlated with exacerbation frequency (p = 0.04), and greater WML volume was associated with worse episodic memory (p = 0.05). A negative relationship between WML volume and FEV1 % pred. approached significance (p = 0.06). There was no evidence of cerebral atrophy within this cohort of stable COPD patients with moderate airflow obstruction. However, there were indications of WM damage consistent with an ischaemic pathology. It cannot be concluded whether this represents a specific COPD, or smoking-related, effect.

  12. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, which outperformed the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377

  13. Data Wise in Action: Stories of Schools Using Data to Improve Teaching and Learning

    ERIC Educational Resources Information Center

    Boudett, Kathryn Parker, Ed.; Steele, Jennifer L., Ed.

    2007-01-01

    What does it look like when a school uses data wisely? "Data Wise in Action", a new companion and sequel to the bestselling "Data Wise", tells the stories of eight very different schools following the Data Wise process of using assessment results to improve teaching and learning. "Data Wise in Action" highlights the…

  14. WISE Design for Knowledge Integration.

    ERIC Educational Resources Information Center

    Linn, Marcia C.; Clark, Douglas; Slotta, James D.

    2003-01-01

    Examines the implementation of Web-based Inquiry Science Environment (WISE), which can incorporate modeling tools and hand-held devices. Describes WISE design team practices, features of the WISE learning environment, and patterns of feature use in WISE library projects. (SOE)

  15. Students' Perspective (Age Wise, Gender Wise and Year Wise) of Parameters Affecting the Undergraduate Engineering Education

    ERIC Educational Resources Information Center

    Kumari, Neeraj

    2014-01-01

    The objective of the study is to examine the students' perspective (age wise, gender wise and year wise) of parameters affecting the undergraduate engineering education system present in a private technical institution in NCR [National Capital Region], Haryana. It is a descriptive type of research in nature. The data has been collected with the…

  16. Disequilibrium dihedral angles in layered intrusions: the microstructural record of fractionation

    NASA Astrophysics Data System (ADS)

    Holness, Marian; Namur, Olivier; Cawthorn, Grant

    2013-04-01

    The dihedral angle formed at junctions between two plagioclase grains and a grain of augite is only rarely in textural equilibrium in gabbros from km-scale crustal layered intrusions. The median of a population of these disequilibrium angles, Θcpp, varies systematically within individual layered intrusions, remaining constant over large stretches of stratigraphy with significant increases or decreases associated with the addition or reduction respectively of the number of phases on the liquidus of the bulk magma. The step-wise changes in Θcpp are present in the Upper Zone of the Bushveld Complex, the Megacyclic Unit I of the Sept Iles Intrusion, and the Layered Series of the Skaergaard Intrusion. The plagioclase-bearing cumulates of Rum have a bimodal distribution of Θcpp, dependent on whether the cumulus assemblage includes clinopyroxene. The presence of the step-wise changes is independent of the order of arrival of cumulus phases and of the composition of either the cumulus phases or the interstitial liquid inferred to be present in the crystal mush. Step-wise changes in the rate of change in enthalpy with temperature (ΔH) of the cooling and crystallizing magma correspond to the observed variation of Θcpp, with increases of both ΔH and Θcpp associated with the addition of another liquidus phase, and decreases of both associated with the removal of a liquidus phase. The replacement of one phase by another (e.g. olivine ⇔ orthopyroxene) has little effect on ΔH and no discernible effect on Θcpp. An increase of ΔH is manifested by an increase in the fraction of the total enthalpy budget that is the latent heat of crystallization (the fractional latent heat). It also results in an increase in the amount crystallized in each incremental temperature drop (the crystal productivity). An increased fractional latent heat and crystal productivity result in an increased rate of plagioclase growth compared to that of augite during the final stages of solidification, causing a step-wise increase in Θcpp. Step-wise changes in the geometry of three-grain junctions in fully solidified gabbros thus provide a clear microstructural marker for the progress of fractionation.

  17. Relationships between HI Gas Mass, Stellar Mass and Star Formation Rate of HICAT+WISE Galaxies

    NASA Astrophysics Data System (ADS)

    Parkash, Vaishali; Brown, Michael J. I.

    2018-01-01

    Galaxies grow via a combination of star formation and mergers. In this thesis, I have studied what drives star formation in nearby galaxies. Using archival WISE, GALEX, 21-cm data and new IFU observations, I examine the HI content, Hα emission, stellar kinematics, and gas kinematics of three sub-classes of galaxies: spiral galaxies, shell galaxies and HI galaxies with unusually low star formation rates (SFR). In this dissertation talk, I will focus on the scaling relations between atomic (HI) gas, stellar mass and SFR of spiral galaxies. Star formation is fuelled by HI and molecular hydrogen; therefore, we expect correlations between HI mass, stellar mass and SFR. However, the measured scaling relationships vary in the prior literature due to sample selection or low completeness. I will discuss new scaling relationships determined using the HI Parkes All Sky Survey Catalogue (HICAT) and the Wide-field Infrared Survey Explorer (WISE). The combination of the local HICAT survey with sensitive WISE mid-infrared imaging improves the stellar masses, SFRs and completeness relative to previous literature. Of the 3,513 HICAT sources, we find 3.4 μm counterparts for 2,824 sources (80%), and provide new WISE matched aperture photometry for these galaxies. For a stellar mass selected sample of z ≤ 0.01 spiral galaxies, we find HI detections for 94% of the galaxies, enabling us to accurately measure HI mass as a function of stellar mass. In contrast to HI-selected galaxy samples, we find that the star formation efficiency of spiral galaxies is constant at 10^-9.5 yr^-1 with a scatter of 0.5 dex for stellar masses above 10^9.5 solar masses. We find HI mass increases with stellar mass for spiral galaxies, but the scatter is 1.7 dex for all spiral galaxies and 0.6 dex for galaxies with T-types 5 to 7. We find an upper limit on HI mass that depends on stellar mass, which is consistent with this limit being dictated by the halo spin parameter.

  18. Calibrating Star Formation in WISE Using Total Infrared Luminosity

    NASA Astrophysics Data System (ADS)

    Cluver, M. E.; Jarrett, T. H.; Dale, D. A.; Smith, J.-D. T.; August, Tamlyn; Brown, M. J. I.

    2017-11-01

    We present accurate resolved WISE photometry of galaxies in the combined SINGS and KINGFISH sample. The luminosities in the W3 12 μm and W4 23 μm bands are calibrated to star formation rates (SFRs) derived using the total infrared luminosity, avoiding UV/optical uncertainties due to dust extinction corrections. The W3 relation has a 1σ scatter of 0.15 dex over nearly 5 orders of magnitude in SFR and 12 μm luminosity, and a range in host stellar mass from dwarf (10^7 M⊙) to ~3 × M* (10^11.5 M⊙) galaxies. In the absence of deep silicate absorption features and powerful active galactic nuclei, we expect this to be a reliable SFR indicator chiefly due to the broad nature of the W3 band. By contrast, the W4 SFR relation shows more scatter (1σ = 0.18 dex). Both relations show reasonable agreement with radio-continuum-derived SFRs and excellent accordance with so-called “hybrid” Hα + 24 μm and FUV + 24 μm indicators. Moreover, the WISE SFR relations appear to be insensitive to the metallicity range in the sample. We also compare our results with IRAS-selected luminous infrared galaxies, showing that the WISE relations maintain concordance, but systematically deviate for the most extreme galaxies. Given the all-sky coverage of WISE and the performance of the W3 band as an SFR indicator, the L_12μm-SFR relation could be of great use to studies of nearby galaxies and forthcoming large-area surveys at optical and radio wavelengths.

  19. Construction of the Second Quito Astrolabe Catalogue

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.

    1994-03-01

    A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, additional parameters being introduced in the step-wise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those due to the rigorous classical method. Some improvement both in systematic and random errors is outlined.

  20. The Southern HII Region Discovery Survey

    NASA Astrophysics Data System (ADS)

    Wenger, Trey; Miller Dickey, John; Jordan, Christopher; Bania, Thomas M.; Balser, Dana S.; Dawson, Joanne; Anderson, Loren D.; Armentrout, William P.; McClure-Griffiths, Naomi

    2016-01-01

    HII regions are zones of ionized gas surrounding recently formed high-mass (OB-type) stars. They are among the brightest objects in the sky at radio wavelengths. HII regions provide a useful tool in constraining the Galactic morphological structure, chemical structure, and star formation rate. We describe the Southern HII Region Discovery Survey (SHRDS), an Australia Telescope Compact Array (ATCA) survey that discovered ~80 new HII regions (so far) in the Galactic longitude range 230 degrees to 360 degrees. This project is an extension of the Green Bank Telescope HII Region Discovery Survey (GBT HRDS), Arecibo HRDS, and GBT Widefield Infrared Survey Explorer (WISE) HRDS, which together discovered ~800 new HII regions in the Galactic longitude range -20 degrees to 270 degrees. Similar to those surveys, candidate HII regions were chosen from 20 micron emission (from WISE) coincident with 10 micron (WISE) and 20 cm (SGPS) emission. By using the ATCA to detect radio continuum and radio recombination line emission from a subset of these candidates, we have added to the population of known Galactic HII regions.

  1. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the serial images have a low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation, respectively, of R2* measurements in selected regions of interest. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
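    For orientation, the baseline pixel-wise step that PCANR regularizes is a mono-exponential fit, S(TE) = S0 · exp(-R2* · TE), per pixel; the Python sketch below shows only this plain NLS step with assumed array conventions and omits the adaptive neighborhood regularization that is the paper's contribution.

      import numpy as np
      from scipy.optimize import curve_fit

      def mono_exp(te, s0, r2star):
          return s0 * np.exp(-r2star * te)

      def fit_r2star_map(images, te_times):
          """images: (n_te, ny, nx) multi-echo magnitude images; te_times: echo times in seconds."""
          _, ny, nx = images.shape
          r2star_map = np.full((ny, nx), np.nan)
          for iy in range(ny):
              for ix in range(nx):
                  signal = images[:, iy, ix]
                  try:
                      popt, _ = curve_fit(mono_exp, te_times, signal,
                                          p0=(float(signal[0]), 50.0), maxfev=2000)
                      r2star_map[iy, ix] = popt[1]
                  except RuntimeError:
                      pass   # leave NaN where the per-pixel fit fails
          return r2star_map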

  2. Disk Detective Follow-Up Program

    NASA Astrophysics Data System (ADS)

    Kuchner, Marc

    As new data on exoplanets and young stellar associations arrive, we will want to know: which of these planetary systems and young stars have circumstellar disks? The vast all-sky database of 747 million infrared sources from NASA's Wide-field Infrared Survey Explorer (WISE) mission can supply answers. WISE is a discovery tool intended to find targets for JWST, sensitive enough to detect circumstellar disks as far away as 3000 light years. The vast WISE archive already serves us as a roadmap to guide exoplanet searches, provide information on disk properties as new planets are discovered, and teach us about the many hotly debated connections between disks and exoplanets. However, because of the challenges of utilizing the WISE data, this resource remains underutilized as a tool for disk and planet hunters. Attempts to use WISE to find disks around Kepler planet hosts were nearly scuttled by confusion noise. Moreover, since most of the stars with WISE infrared excesses were too red for Hipparcos photometry, most of the disks sensed by WISE remain obscure, orbiting stars unlisted in the usual star databases. To remedy the confusion noise problem, we have begun a massive project to scour the WISE data archive for new circumstellar disks. The Disk Detective project (Kuchner et al. 2016) engages layperson volunteers to examine images from WISE, NASA's Two Micron All-Sky Survey (2MASS) and optical surveys to search for new circumstellar disk candidates via the citizen science website DiskDetective.org. Fueled by the efforts of > 28,000 citizen scientists, Disk Detective is the largest survey for debris disks with WISE. It has already uncovered 4000 disk candidates worthy of follow-up. However, most host stars of the new Disk Detective disk candidates have no known spectral type or distance, especially those with red colors: K and M stars and Young Stellar Objects. Others require further observations to check for false positives. The Disk Detective project is supported by NASA ADAP funds, which are not allowed to fund a major observational follow-up campaign. So here we propose a campaign of follow-up observations that will turn the unique, growing catalog of Disk Detective disk candidates into a reliable, publicly available treasure trove of new data on nearby disks in time to complement the upcoming new catalogs of planet hosts and stellar moving groups. We will use automated adaptive optics (AO) instruments to image disk candidates and check them for contamination from background objects. We will correlate our discoveries with the vast Gaia and LAMOST surveys to study disks in associations with other young stars. We will follow up disk candidates spectroscopically to remove more false positives. We will search for cold dust around our disk candidates with the James Clerk Maxwell Telescope (JCMT) and analyze data from the Gemini Planet Imager (GPI) to image young, nearby disk candidates. This follow-up work will realize the full potential of the WISE mission as a roadmap to future exoplanet discoveries. It will yield contamination rates that will be crucial for interpreting all disk searches done with WISE. Our search will yield 2000 well-vetted nearby disks, including 60 that the Gaia mission will likely find to contain giant planets. This crucial follow-up work should be done now to take full advantage of Gaia during JWST's planned lifetime.

  3. Injection Characteristics of Non-Swirling and Swirling Annular Liquid Sheets

    NASA Technical Reports Server (NTRS)

    Harper, Brent (Technical Monitor); Ibrahim, E. A.; McKinney, T. R.

    2004-01-01

    A simplified mathematical model, based on body-fitted coordinates, is formulated to study the evolution of non-swirling and swirling liquid sheets emanating from an annular nozzle in a quiescent surrounding medium. The model provides predictions of sheet trajectory, thickness and velocity at various liquid mass flow rates and liquid-swirler angles. It is found that a non-swirling annular sheet converges toward its centerline and assumes a bell shape as it moves downstream from the nozzle. The bell radius and length are more pronounced at higher liquid mass flow rates. The thickness of the non-swirling annular sheet increases while its stream-wise velocity decreases with an increase in mass flow rate. The introduction of swirl results in the formation of a diverging hollow-cone sheet. The hollow-cone divergence from its centerline is enhanced by an increase in liquid mass flow rate or liquid-swirler angle. The hollow-cone sheet's radius, curvature and stream-wise velocity increase, while its thickness and tangential velocity decrease, as a result of increasing the mass flow rate or liquid-swirler angle. The present results are compared with previous studies and conclusions are drawn.

  4. Building Community and Fostering Success in STEM Through the Women in Science & Engineering (WiSE) Program at the University of Nevada, Reno

    NASA Astrophysics Data System (ADS)

    Langus, T. C.; Tempel, R. N.

    2017-12-01

    The Women in Science & Engineering (WiSE) program at the University of Nevada, Reno (UNR) aims to recruit and retain a diverse population of women in STEM fields. During the WiSE Program's 10 years in service, we have primarily functioned as a resource for 364 young women to expand their pre-professional network by building valuable relationships with like-minded women. More recently, we have introduced key changes to better benefit our WiSE scholars, establishing a new residence hall, the Living Learning Community (LLC). The introduction of the LLC, resident assistants, and academic mentors helped to provide support to a diverse culture of women with varying thoughts, values, attitudes, and identities. To evaluate the progress of our program, demographic data was statistically analyzed using SPSS to identify correlations between math preparation, performance in foundational courses, average time to graduation, and retention in STEM majors. Initial programmatic assessment indicates that students participating in WiSE are provided a more well-rounded experience while pursuing higher education. We have maintained a 90% retention rate of females graduating with bachelor's degrees in STEM disciplines (n=187), with many graduates completing advanced masters and doctoral degrees and seamlessly entering into post-graduate internships, professional, and industry careers. The success of the WiSE program is attributed to a focused initiative in fostering supportive classroom environments through common course enrollment, professional development, and engaging women in their community through service learning. As a continued focus, we aim to increase the inclusivity and representation of women at UNR in underrepresented fields such as physics, math, and the geosciences. Further program improvements will be based on ongoing research, including a qualitative approach to explore how providing gender equitable resources influences the persistence of women in STEM.

  5. Characterization of wise protein and its molecular mechanism to interact with both Wnt and BMP signals.

    PubMed

    Lintern, Katherine B; Guidato, Sonia; Rowe, Alison; Saldanha, José W; Itasaki, Nobue

    2009-08-21

    Cross-talk of BMP and Wnt signaling pathways has been implicated in many aspects of biological events during embryogenesis and in adulthood. A secreted protein Wise and its orthologs (Sostdc1, USAG-1, and Ectodin) have been shown to modulate Wnt signaling and also inhibit BMP signals. Modulation of Wnt signaling activity by Wise is brought about by an interaction with the Wnt co-receptor LRP6, whereas BMP inhibition is by binding to BMP ligands. Here we have investigated the mode of action of Wise on Wnt and BMP signals. It was found that Wise binds LRP6 through one of three loops formed by the cystine knot. The Wise deletion construct lacking the LRP6-interacting loop domain nevertheless binds BMP4 and inhibits BMP signals. Moreover, BMP4 does not interfere with Wise-LRP6 binding, suggesting separate domains for the physical interaction. Functional assays also show that the ability of Wise to block Wnt1 activity through LRP6 is not impeded by BMP4. In contrast, the ability of Wise to inhibit BMP4 is prevented by additional LRP6, implying a preference of Wise in binding LRP6 over BMP4. In addition to the interaction of Wise with BMP4 and LRP6, the molecular characteristics of Wise, such as glycosylation and association with heparan sulfate proteoglycans on the cell surface, are suggested. This study helps to understand the multiple functions of Wise at the molecular level and suggests a possible role for Wise in balancing Wnt and BMP signals.

  6. Implementation errors in the GingerALE Software: Description and recommendations.

    PubMed

    Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T

    2017-01-01

    Neuroscience imaging is a burgeoning, highly sophisticated field, the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.

  7. A density functional theory study on peptide bond cleavage at aspartic residues: direct vs cyclic intermediate hydrolysis.

    PubMed

    Sang-aroon, Wichien; Amornkitbamrung, Vittaya; Ruangpornvisuti, Vithaya

    2013-12-01

    In this work, peptide bond cleavages at carboxy- and amino-sides of the aspartic residue in a peptide model via direct (concerted and step-wise) and cyclic intermediate hydrolysis reaction pathways were explored computationally. The energetics, thermodynamic properties, rate constants, and equilibrium constants of all hydrolysis reactions, as well as their energy profiles were computed at the B3LYP/6-311++G(d,p) level of theory. The result indicated that peptide bond cleavage of the Asp residue occurred most preferentially via the cyclic intermediate hydrolysis pathway. In all reaction pathways, cleavage of the peptide bond at the amino-side occurred less preferentially than at the carboxy-side. The overall reaction rate constants of peptide bond cleavage of the Asp residue at the carboxy-side for the assisted system were, in increasing order: concerted < step-wise < cyclic intermediate.

  8. Mid-infrared Variability of Changing-look AGNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zhenfeng; Wang, Tinggui; Jiang, Ning

    2017-09-01

    It is known that some active galactic nuclei (AGNs) transit from Type 1 to Type 2 or vice versa. There are two explanations for the so-called changing-look AGNs: one is the dramatic change of the obscuration along the line of sight, and the other is the variation of accretion rate. In this Letter, we report the detection of large amplitude variations in the mid-infrared luminosity during the transitions in 10 changing-look AGNs using the Wide-field Infrared Survey Explorer (WISE) and newly released Near-Earth Object WISE Reactivation data. The mid-infrared light curves of the 10 objects echo the variability in the optical band with a time lag expected for dust reprocessing. The large variability amplitude is inconsistent with the scenario of varying obscuration; rather, it supports the scheme of dramatic change in the accretion rate.

  9. VizieR Online Data Catalog: Bright white dwarfs IRAC photometry (Barber+, 2016)

    NASA Astrophysics Data System (ADS)

    Barber, S. D.; Belardi, C.; Kilic, M.; Gianninas, A.

    2017-07-01

    Mid-infrared photometry, like the 3.4 and 4.6um photometry available from WISE, is necessary to detect emission from a debris disc orbiting a WD. WISE, however, has poor spatial resolution (6 arcsec beam size) and is known to have a 75 per cent false positive rate for detecting dusty discs around WDs fainter than 14.5(15) mag in W1(W2) (Barber et al. 2014ApJ...786...77B). To mitigate this high rate of spurious detections, we compile higher spatial resolution archival data from the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope. We query the Spitzer Heritage Archive for any observations within 10 arcsec of the 1265 WDs from Gianninas et al. (2011, Cat. J/ApJ/743/138) and find 907 Astronomical Observing Requests (AORs) for 381 WDs. (1 data file).

  10. Wise, a context-dependent activator and inhibitor of Wnt signalling.

    PubMed

    Itasaki, Nobue; Jones, C Michael; Mercurio, Sara; Rowe, Alison; Domingos, Pedro M; Smith, James C; Krumlauf, Robb

    2003-09-01

    We have isolated a novel secreted molecule, Wise, by a functional screen for activities that alter the anteroposterior character of neuralised Xenopus animal caps. Wise encodes a secreted protein capable of inducing posterior neural markers at a distance. Phenotypes arising from ectopic expression or depletion of Wise resemble those obtained when Wnt signalling is altered. In animal cap assays, posterior neural markers can be induced by Wnt family members, and induction of these markers by Wise requires components of the canonical Wnt pathway. This indicates that in this context Wise activates the Wnt signalling cascade by mimicking some of the effects of Wnt ligands. Activation of the pathway was further confirmed by nuclear accumulation of beta-catenin driven by Wise. By contrast, in an assay for secondary axis induction, extracellularly Wise antagonises the axis-inducing ability of Wnt8. Thus, Wise can activate or inhibit Wnt signalling in a context-dependent manner. The Wise protein physically interacts with the Wnt co-receptor, lipoprotein receptor-related protein 6 (LRP6), and is able to compete with Wnt8 for binding to LRP6. These activities of Wise provide a new mechanism for integrating inputs through the Wnt coreceptor complex to modulate the balance of Wnt signalling.

  11. Explanatory Supplement to the AllWISE Data Release Products

    NASA Astrophysics Data System (ADS)

    Cutri, R. M.; Wright, E. L.; Conrow, T.; Fowler, J. W.; Eisenhardt, P. R. M.; Grillmair, C.; Kirkpatrick, J. D.; Masci, F.; McCallon, H. L.; Wheelock, S. L.; Fajardo-Acosta, S.; Yan, L.; Benford, D.; Harbut, M.; Jarrett, T.; Lake, S.; Leisawitz, D.; Ressler, M. E.; Stanford, S. A.; Tsai, C. W.; Liu, F.; Helou, G.; Mainzer, A.; Gettings, D.; Gonzalez, A.; Hoffman, D.; Marsh, K. A.; Padgett, D.; Skrutskie, M. F.; Beck, R. P.; Papin, M.; Wittman, M.

    2013-11-01

    The AllWISE program builds upon the successful Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) mission by combining data from all WISE and NEOWISE (Mainzer et al. 2011) survey phases to form the most comprehensive view of the mid-infrared sky currently available. By combining the data from two complete sky coverage epochs in an advanced data processing system, AllWISE has generated new products that have enhanced photometric sensitivity and accuracy, and improved astrometric precision compared with the earlier WISE All-Sky Data Release. Exploiting the 6 month baseline between the WISE sky coverage epochs enables AllWISE to measure source motions for the first time, and to compute improved flux variability statistics. AllWISE data release products include: a Source Catalog that contains 4-band fluxes, positions, apparent motion measurements, and flux variability statistics for over 747 million objects detected at SNR>5 in the combined exposures; a Multiepoch Photometry Database containing over 42 billion time-tagged, single-exposure fluxes for each object detected on the combined exposures; and an Image Atlas of 18,240 4-band calibrated FITS images, depth-of-coverage and noise maps that cover the sky produced by coadding nearly 7.9 million single-exposure images from the cryogenic and post-cryogenic survey phases. The Explanatory Supplement to the AllWISE Data Release Products is a general guide for users of the AllWISE data. The Supplement contains detailed descriptions of the format and characteristics of the AllWISE data products, as well as a summary of cautionary notes that describe known limitations. The Supplement is an on-line document that is updated frequently to provide the most current information for users of the AllWISE data products. The Explanatory Supplement is maintained at: http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/index.html AllWISE makes use of data from WISE, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration.

  12. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
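
    The abstract names the least mean-squares (LMS) algorithm but not its implementation; below is a minimal, self-contained sketch of an LMS adaptive noise canceller. The filter length, step size, and synthetic signals are illustrative assumptions, not parameters or data from the GOCE study.

        import numpy as np

        def lms_filter(observed, reference, n_taps=32, mu=0.01):
            """Basic LMS adaptive filter: the weights learn to predict the noise in
            `observed` from the correlated `reference`; the returned error signal
            is the cleaned series."""
            w = np.zeros(n_taps)
            noise_estimate = np.zeros_like(observed)
            cleaned = np.array(observed, dtype=float)
            for n in range(n_taps, len(observed)):
                x = reference[n - n_taps:n][::-1]       # most recent reference samples
                noise_estimate[n] = w @ x
                cleaned[n] = observed[n] - noise_estimate[n]
                w += 2.0 * mu * cleaned[n] * x          # steepest-descent weight update
            return cleaned

        # Illustrative use on a synthetic gradient-like series.
        rng = np.random.default_rng(0)
        t = np.arange(5000)
        signal = np.sin(2 * np.pi * t / 500.0)
        noise = rng.normal(scale=0.5, size=t.size)
        filtered = lms_filter(signal + noise, noise)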

  13. A large-area, spatially continuous assessment of land cover map error and its impact on downstream analyses.

    PubMed

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2018-01-01

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users. © 2017 John Wiley & Sons Ltd.
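
    As a concrete illustration of the reported error measures, the bias and mean absolute error of a gridded cropland-fraction map against a reference map can be computed pixel-wise as sketched below; the grids here are random placeholders, not the South African data used in the study.

        import numpy as np

        def map_error_stats(mapped_fraction, reference_fraction):
            """Pixel-wise bias and mean absolute error (MAE) between a mapped and a
            reference cropland-fraction grid (values in [0, 1]); a negative bias
            indicates underestimation of cropland."""
            diff = mapped_fraction - reference_fraction
            return diff.mean(), np.abs(diff).mean()

        # Placeholder grids for illustration only.
        rng = np.random.default_rng(1)
        reference = rng.uniform(0.0, 1.0, size=(100, 100))
        mapped = np.clip(reference + rng.normal(-0.1, 0.2, size=reference.shape), 0.0, 1.0)
        bias, mae = map_error_stats(mapped, reference)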

  14. Controlled loading of cryoprotectants (CPAs) to oocyte with linear and complex CPA profiles on a microfluidic platform.

    PubMed

    Heo, Yun Seok; Lee, Ho-Joon; Hassell, Bryan A; Irimia, Daniel; Toth, Thomas L; Elmoazzen, Heidi; Toner, Mehmet

    2011-10-21

    Oocyte cryopreservation has become an essential tool in the treatment of infertility by preserving oocytes for women undergoing chemotherapy. However, despite recent advances, pregnancy rates from all cryopreserved oocytes remain low. The inevitable use of the cryoprotectants (CPAs) during preservation affects the viability of the preserved oocytes and pregnancy rates either through CPA toxicity or osmotic injury. Current protocols attempt to reduce CPA toxicity by minimizing CPA concentrations, or by minimizing the volume changes via the step-wise addition of CPAs to the cells. Although the step-wise addition decreases osmotic shock to oocytes, it unfortunately increases toxic injuries due to the long exposure times to CPAs. To address limitations of current protocols and to rationally design protocols that minimize the exposure to CPAs, we developed a microfluidic device for the quantitative measurements of oocyte volume during various CPA loading protocols. We spatially secured a single oocyte on the microfluidic device, created precisely controlled continuous CPA profiles (step-wise, linear and complex) for the addition of CPAs to the oocyte and measured the oocyte volumetric response to each profile. With both linear and complex profiles, we were able to load 1.5 M propanediol to oocytes in less than 15 min and with a volumetric change of less than 10%. Thus, we believe this single oocyte analysis technology will eventually help future advances in assisted reproductive technologies and fertility preservation.

  15. Searching For Infrared Excesses Around White Dwarf Stars

    NASA Astrophysics Data System (ADS)

    Deeb Wilson, Elin; Rebull, Luisa M.; Debes, John H.; Stark, Chris

    2017-01-01

    Many WDs have been found to be “polluted,” meaning they contain heavier elements in their atmospheres. Either an active process that counters gravitational settling is taking place, or an external mechanism is the cause. One proposed external mechanism for atmospheric pollution of WDs is the disintegration and accretion of rocky bodies, which would result in a circumstellar (CS) disk. As CS disks are heated, they emit excess infrared (IR) emission. WDs with IR excesses indicative of a CS disk are known as dusty WDs. Statistical studies are still needed to determine how numerous dusty, polluted WDs are, along with trends and correlations regarding rate of planetary accretion, the lifetimes of CS disks, and the structure and evolution of CS disks. These findings will allow for a better understanding of the fates of planets along with potential habitability of surviving planets. In this work, we are trying to confirm IR excesses around a sample of 69 WD stars selected as part of the WISE InfraRed Excesses around Degenerates (WIRED) Survey (Debes et al. 2011). We have archival data from WISE, Spitzer, 2MASS, DENIS, and SDSS. The targets were initially selected from the Sloan Digital Sky Survey (SDSS), and identified as containing IR excesses based on WISE data. We also have data from the Four Star Infrared Camera array, which is part of Carnegie Institution’s Magellan 6.5 meter Baade Telescope located at Las Campanas Observatory in Chile. These Four Star data have much higher spatial resolution than the WISE data that were used to determine if each WD has an IR excess. There are often not many bands delineating the IR excess portion of the SED; therefore, we are using the Four Star data to check if there is another source in the WISE beam affecting the IR excess.

  16. Student goal orientation in learning inquiry skills with modifiable software advisors

    NASA Astrophysics Data System (ADS)

    Shimoda, Todd A.; White, Barbara Y.; Frederiksen, John R.

    2002-03-01

    A computer support environment (SCI-WISE) for learning and doing science inquiry projects was designed. SCI-WISE incorporates software advisors that give general advice about a skill such as hypothesizing. By giving general advice (rather than step-by-step procedures), the system is intended to help students conduct experiments that are more epistemologically authentic. Also, students using SCI-WISE can select the type of advice the advisors give and when they give advice, as well as modify the advisors' knowledge bases. The system is based partly on a theoretical framework of levels of agency and goal orientation. This framework assumes that giving students higher levels of agency facilitates higher-level goal orientations (such as mastery or knowledge building as opposed to task completion) that in turn produce higher levels of competence. A study of sixth grade science students was conducted. Students took a pretest questionnaire that measured their goal orientations for science projects and their inquiry skills. The students worked in pairs on an open-ended inquiry project that requires complex reasoning about human memory. The students used one of two versions of SCI-WISE - one that was modifiable and one that was not. After finishing the project, the students took a posttest questionnaire similar to the pretest, and evaluated the version of the system they used. The main results showed that (a) there was no correlation of goal orientation with grade point average, (b) knowledge-oriented students using the modifiable version tended to rate SCI-WISE more helpful than task-oriented students, and (c) knowledge-oriented pairs using the nonmodifiable version tended to have higher posttest inquiry skills scores than other pair types.

  17. IRAC Photometry of the Coldest CatWISE-selected Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Meisner, Aaron; Kirkpatrick, J. Davy; Kirkpatrick, J. Davy; Eisenhardt, Peter; Marocco, Federico; Faherty, Jacqueline; Cushing, Michael; Wright, Edward

    2018-05-01

    We will obtain IRAC [3.6] and [4.5] photometry of 250 extremely cool brown dwarfs newly revealed by the powerful combination of WISE and NEOWISE imaging at 4.6 microns. Our CatWISE effort, which is an archival data analysis program using WISE and NEOWISE data, will improve upon the motion selection of AllWISE by enabling a >10x time baseline enhancement, from 0.5 years (AllWISE) to 6.5 years (CatWISE). As a result, CatWISE motion selection is expected to yield a dramatic 8-fold increase in the sample of known brown dwarfs at spectral types T5 and later (T < 1,200 K). Many of the coolest such CatWISE discoveries will be detected exclusively in the WISE 4.6 micron (W2) channel. WISE W1 (3.4 micron) nondetections, which we expect for the majority of our most interesting sources, will provide only limits on mid-infrared color. Spitzer can supply this critical datum by measuring accurate [3.6]-[4.5] colors of our discoveries. These Spitzer color measurements will permit photometric spectral type estimates, which in turn yield estimates for critical parameters including luminosity, distance, and near-infrared flux. Using large [3.6]-[4.5] color to pinpoint the coldest late T and Y dwarfs among our CatWISE sample will enable us to prioritize these objects for spectroscopic follow-up, better understand the bottom of the substellar mass function, and identify nearby giant planet analogs suitable for future atmospheric studies with JWST.

  18. WISE Photometry for 400 million SDSS sources

    DOE PAGES

    Lang, Dustin; Hogg, David W.; Schlegel, David J.

    2016-01-28

    Here, we present photometry of images from the Wide-Field Infrared Survey Explorer (WISE) of over 400 million sources detected by the Sloan Digital Sky Survey (SDSS). We use a "forced photometry" technique, using measured SDSS source positions, star-galaxy classification, and galaxy profiles to define the sources whose fluxes are to be measured in the WISE images. We perform photometry with The Tractor image modeling code, working on our "unWISE" coadds and taking account of the WISE point-spread function and a noise model. The result is a measurement of the flux of each SDSS source in each WISE band. Many sources have little flux in the WISE bands, so often the measurements we report are consistent with zero given our uncertainties. But for many sources we get 3σ or 4σ measurements; these sources would not be reported by the "official" WISE pipeline and will not appear in the WISE catalog, yet they can be highly informative for some scientific questions. In addition, these small-signal measurements can be used in stacking analyses at the catalog level. The forced photometry approach has the advantage that we measure a consistent set of sources between SDSS and WISE, taking advantage of the resolution and depth of the SDSS images to interpret the WISE images; objects that are resolved in SDSS but blended together in WISE still have accurate measurements in our photometry. Our results, and the code used to produce them, are publicly available at http://unwise.me.
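
    The essential idea of forced photometry is that, with source positions and profiles held fixed, the per-source fluxes are the only free parameters, so the fit reduces to linear least squares. The toy sketch below illustrates this idea only; it is not the Tractor code, and the unit-flux model images and uniform noise level are simplifying assumptions.

        import numpy as np

        def forced_photometry(image, unit_flux_models, noise_sigma):
            """Fit fluxes of sources with fixed positions/profiles by linear least
            squares; each entry of `unit_flux_models` is that source's PSF- or
            profile-convolved image for unit flux. Returns fluxes and 1-sigma
            uncertainties assuming uniform Gaussian pixel noise."""
            A = np.stack([m.ravel() for m in unit_flux_models], axis=1) / noise_sigma
            b = image.ravel() / noise_sigma
            fluxes, *_ = np.linalg.lstsq(A, b, rcond=None)
            flux_cov = np.linalg.inv(A.T @ A)
            return fluxes, np.sqrt(np.diag(flux_cov))

    Because the problem is linear, fluxes consistent with zero (and their uncertainties) come out of the same fit, which is what allows the small-signal measurements described above.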

  19. VizieR Online Data Catalog: Frequency of snowline-region planets (Shvartzvald+, 2016)

    NASA Astrophysics Data System (ADS)

    Shvartzvald, Y.; Maoz, D.; Udalski, A.; Sumi, T.; Friedmann, M.; Kaspi, S.; Poleski, R.; Szymanski, M. K.; Skowron, J.; Kozlowski, S.; Wyrzykowski, L.; Mroz, P.; Pietrukowicz, P.; Pietrzynski, G.; Soszynski, I.; Ulaczyk, K.; Abe, F.; Barry, R. K.; Bennett, D. P.; Bhattacharya, A.; Bond, I. A.; Freeman, M.; Inayama, K.; Itow, Y.; Koshimoto, N.; Ling, C. H.; Masuda, K.; Fukui, A.; Matsubara, Y.; Muraki, Y.; Ohnishi, K.; Rattenbury, N. J.; Saito, T.; Sullivan, D. J.; Suzuki, D.; Tristram, P. J.; Wakiyama, Y.; Yonehara, A.

    2017-06-01

    Our genII survey network is a collaboration between three groups: OGLE, MOA, and Wise. The OGLE and MOA groups regularly monitor a large region of the Galactic bulge, and routinely identify and monitor microlensing events. The Wise group monitors a field of 8 deg^2, within the observational footprints of both OGLE and MOA, having the highest event rates based on previous years' observations (see Shvartzvald & Maoz, 2012MNRAS.419.3631S). The sample of microlensing events analysed here consists of 224 events from the 2011-2014 bulge seasons, observed by all three groups, and with each group having data near the peak of the event. (1 data file).

  20. Wise Detections of Known QSOS at Redshifts Greater Than Six

    NASA Technical Reports Server (NTRS)

    Blain, Andrew W.; Assef, Roberto; Stern, Daniel; Tsai, Chao-Wei; Eisenhardt, Peter; Bridge, Carrie; Benford, Dominic; Jarrett, Tom; Cutri, Roc; Petty, Sara

    2013-01-01

    We present WISE All-Sky mid-infrared (IR) survey detections of 55% (17/31) of the known QSOs at z greater than 6 from a range of surveys: the SDSS, the CFHT-LS, FIRST, Spitzer and UKIDSS. The WISE catalog thus provides a substantial increase in the quantity of IR data available for these sources: 17 are detected in the WISE W1 (3.4 micrometer) band, 16 in W2 (4.6 micrometers), 3 in W3 (12 micrometers) and 0 in W4 (22 micrometers). This is particularly important with Spitzer in its warm-mission phase and no faint follow-up capability at wavelengths longward of 5 micrometers until the launch of JWST. WISE thus provides a useful tool for understanding QSOs found in forthcoming large-area optical/IR sky surveys using PanSTARRS, SkyMapper, VISTA, DES and LSST. The rest-UV properties of the WISE-detected and the WISE-non-detected samples differ: the detections have brighter i/z-band magnitudes and redder rest-UV colors. This suggests that a more aggressive hunt for very-high-redshift QSOs, combining WISE W1 and W2 data with red observed optical colors, could be effective, at least for a subset of dusty candidate QSOs. Stacking the WISE images of the WISE-non-detected QSOs indicates that they are on average significantly fainter than the WISE-detected examples, and are thus not narrowly missing detection in the WISE catalog. The WISE-catalog detection of three of our sample in the W3 band indicates that their mid-IR flux can be detected individually, although there is no stacked W3 detection of sources detected in W1 but not W3. Stacking analyses of WISE data for large AGN samples will be a useful tool, and high-redshift QSOs of all types will be easy targets for JWST.

  1. Wise retained in the endoplasmic reticulum inhibits Wnt signaling by reducing cell surface LRP6.

    PubMed

    Guidato, Sonia; Itasaki, Nobue

    2007-10-15

    The Wnt signaling pathway is tightly regulated by extracellular and intracellular modulators. Wise was isolated as a secreted protein capable of interacting with the Wnt co-receptor LRP6. Studies in Xenopus embryos revealed that Wise either enhances or inhibits the Wnt pathway depending on the cellular context. Here we show that the cellular localization of Wise has distinct effects on the Wnt pathway readout. While secreted Wise either synergizes or inhibits the Wnt signals depending on the partner ligand, ER-retained Wise consistently blocks the Wnt pathway. ER-retained Wise reduces LRP6 on the cell surface, making cells less susceptible to the Wnt signal. This study provides a cellular mechanism for the action of Wise and introduces the modulation of cellular susceptibility to Wnt signals as a novel mechanism of the regulation of the Wnt pathway.

  2. Learning-based subject-specific estimation of dynamic maps of cortical morphology at missing time points in longitudinal infant studies.

    PubMed

    Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang

    2016-11-01

    Longitudinal neuroimaging analysis of the dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart the brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with the average error less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
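
    DARF is a custom, dynamically assembled regression forest; as a rough illustration of the pairwise-estimation step only, an off-the-shelf random forest can be trained to map an observed time point to a missing one, vertex by vertex. Everything below (the scikit-learn estimator, the single-feature design, the simulated thickness values) is an assumed stand-in, not the authors' implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Simulated vertex-wise cortical thickness for subjects observed at both
        # time points, plus one subject missing the later time point.
        rng = np.random.default_rng(2)
        n_subjects, n_vertices = 20, 1000
        thick_t1 = rng.uniform(1.5, 3.5, size=(n_subjects, n_vertices))
        thick_t2 = thick_t1 + rng.normal(0.3, 0.1, size=thick_t1.shape)

        # Pairwise estimation: learn a mapping from time point 1 to time point 2.
        forest = RandomForestRegressor(n_estimators=50, random_state=0)
        forest.fit(thick_t1.reshape(-1, 1), thick_t2.ravel())

        # Estimate the missing map for a new subject observed only at time point 1.
        new_t1 = rng.uniform(1.5, 3.5, size=n_vertices)
        estimated_t2 = forest.predict(new_t1.reshape(-1, 1))

    In the actual method, such pairwise estimates are followed by a joint refinement across all available scans to enforce spatial smoothness of the estimated maps.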

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casagrande, L.; Asplund, M.; Ramirez, I.

    Solar infrared colors provide powerful constraints on the stellar effective temperature scale, but they must be measured with both accuracy and precision in order to do so. We fulfill this requirement by using line-depth ratios to derive in a model-independent way the infrared colors of the Sun, and we use the latter to test the zero point of the Casagrande et al. effective temperature scale, confirming its accuracy. Solar colors in the widely used Two Micron All Sky Survey (2MASS) JHKs and WISE W1-4 systems are provided: (V - J)_Sun = 1.198, (V - H)_Sun = 1.484, (V - Ks)_Sun = 1.560, (J - H)_Sun = 0.286, (J - Ks)_Sun = 0.362, (H - Ks)_Sun = 0.076, (V - W1)_Sun = 1.608, (V - W2)_Sun = 1.563, (V - W3)_Sun = 1.552, and (V - W4)_Sun = 1.604. A cross-check of the effective temperatures derived implementing 2MASS or WISE magnitudes in the infrared flux method confirms that the absolute calibration of the two systems agrees within the errors, possibly suggesting a 1% offset between the two, thus validating extant near- and mid-infrared absolute calibrations. While 2MASS magnitudes are usually well suited to derive Teff, we find that a number of bright, solar-like stars exhibit anomalous WISE colors. In most cases, this effect is spurious and can be attributed to lower-quality measurements, although for a couple of objects (3% ± 2% of the total sample) it might be real, and may hint at the presence of warm/hot debris disks.

  4. Galaxy and mass assembly (GAMA): the consistency of GAMA and WISE derived mass-to-light ratios

    NASA Astrophysics Data System (ADS)

    Kettlety, T.; Hesling, J.; Phillipps, S.; Bremer, M. N.; Cluver, M. E.; Taylor, E. N.; Bland-Hawthorn, J.; Brough, S.; De Propris, R.; Driver, S. P.; Holwerda, B. W.; Kelvin, L. S.; Sutherland, W.; Wright, A. H.

    2018-01-01

    Recent work has suggested that mid-IR wavelengths are optimal for estimating the mass-to-light ratios of stellar populations and hence the stellar masses of galaxies. We compare stellar masses deduced from spectral energy distribution (SED) models, fitted to multiwavelength optical-NIR photometry, to luminosities derived from WISE photometry in the W1 and W2 bands at 3.4 and 4.6 μm for non-star forming galaxies. The SED-derived masses for a carefully selected sample of low-redshift (z ≤ 0.15) passive galaxies agree with the prediction from stellar population synthesis models such that M*/LW1 ≃ 0.6 for all such galaxies, independent of other stellar population parameters. The small scatter between masses predicted from the optical SED and from the WISE measurements implies that random errors (as opposed to systematic ones such as the use of different initial mass functions) are smaller than previous, deliberately conservative, estimates for the SED fits. This test is subtly different from simultaneously fitting at a wide range of optical and mid-IR wavelengths, which may just generate a compromised fit: we are directly checking that the best-fitting model to the optical data generates an SED whose M*/LW1 is also consistent with separate mid-IR data. We confirm that for passive low-redshift galaxies a fixed M*/LW1 = 0.65 can generate masses at least as accurate as those obtained from more complex methods. Going beyond the mean value, in agreement with expectations from the models, we see a modest change in M*/LW1 with SED fitted stellar population age but an insignificant one with metallicity.

  5. A method for digital image registration using a mathematical programming technique

    NASA Technical Reports Server (NTRS)

    Yao, S. S.

    1973-01-01

    A new algorithm based on a nonlinear programming technique to correct the geometrical distortions of one digital image with respect to another is discussed. This algorithm promises to be superior to existing ones in that it is capable of treating localized differential scaling, translational and rotational errors over the whole image plane. A series of piece-wise 'rubber-sheet' approximations are used, constrained in such a manner that a smooth approximation over the entire image can be obtained. The theoretical derivation is included. The result of using the algorithm to register four channel S065 Apollo IX digitized photography over Imperial Valley, California, is discussed in detail.

  6. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
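
    The abstract does not give the correction explicitly, but sequencing error typically enters a likelihood-based phylogeny calculation at the tips: instead of assigning probability 1 to the observed base and 0 to the others, each tip gets the probability of observing that base given each possible true base under an assumed error rate. A minimal sketch of that substitution (the uniform error model and the 1% rate are assumptions for illustration, not the paper's settings):

        import numpy as np

        BASES = "ACGT"

        def tip_partial_likelihoods(observed_base, error_rate):
            """Tip partial-likelihood vector: P(observed base | true base) under a
            uniform error model that spreads errors evenly over the other 3 bases."""
            probs = np.full(4, error_rate / 3.0)
            probs[BASES.index(observed_base)] = 1.0 - error_rate
            return probs

        # Without correction the tip vector for an observed 'A' is [1, 0, 0, 0];
        # with an assumed 1% error rate it becomes:
        print(tip_partial_likelihoods("A", 0.01))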

  7. A weighted adjustment of a similarity transformation between two point sets containing errors

    NASA Astrophysics Data System (ADS)

    Marx, C.

    2017-10-01

    For an adjustment of a similarity transformation, it is often appropriate to consider that both the source and the target coordinates of the transformation are affected by errors. For the least-squares adjustment of this problem, a direct solution is possible in the case of specific weighting schemes for the coordinates. Such a problem is considered in the present contribution, and a direct solution is derived in general form for the m-dimensional space. The applied weighting scheme allows (fully populated) point-wise weight matrices for the source and target coordinates; both weight matrices have to be proportional to each other. Additionally, the solutions of two borderline cases of this weighting scheme are derived, which only consider errors in the source or target coordinates. The investigated solution of the rotation matrix of the adjustment is independent of the scaling between the weight matrices of the source and the target coordinates. The mentioned borderline cases, therefore, have the same solution for the rotation matrix. The direct solution method is successfully tested on an example of a 3D similarity transformation using a comparison with an iterative solution based on the Gauß-Helmert model.
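
    For orientation, the simpler version of this problem with scalar per-point weights (rather than the fully populated point-wise weight matrices treated in the paper) already has a compact direct solution via an SVD; the sketch below shows that simplified case only and is not the contribution's general solution.

        import numpy as np

        def weighted_similarity(source, target, weights):
            """Direct weighted least-squares estimate of scale s, rotation R and
            translation t such that target_i ~ s * R @ source_i + t, using scalar
            per-point weights (a simplification of the paper's weight matrices)."""
            w = weights / weights.sum()
            p_bar, q_bar = w @ source, w @ target           # weighted centroids
            X, Y = source - p_bar, target - q_bar
            H = (X * w[:, None]).T @ Y                      # weighted cross-covariance
            U, S, Vt = np.linalg.svd(H)
            D = np.eye(H.shape[1])
            D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # enforce a proper rotation
            R = Vt.T @ D @ U.T
            s = np.trace(np.diag(S) @ D) / (w @ (X * X).sum(axis=1))
            t = q_bar - s * R @ p_bar
            return s, R, t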

  8. Spacewatch Survey for Asteroids and Comets

    DTIC Science & Technology

    2005-11-01

    radar images. Relationship of Spacewatch to the WISE spacecraft mission: E. L. Wright of the UCLA Astronomy Dept. is the PI of the Wide-field Infrared Survey Explorer (WISE) MIDEX spacecraft mission. WISE will map the whole sky at thermal infrared wavelengths with 500 times more sensitivity than the...elongations. WISE's detections in the thermal infrared will also provide a size-limited sample of asteroids instead of the brightness-limited surveys

  9. These Strategies Soothe the Sting of Teacher Evaluation.

    ERIC Educational Resources Information Center

    Alkire, Phil

    1990-01-01

    When conducting teacher evaluations, the wise principal acts within union contracts and board policies, asks teachers for self-evaluations, carefully plans classroom visits, observes correctly, takes accurate notes, considers videotaping teachers, deemphasizes ratings, makes postevaluation conferences meaningful, and offers teachers a chance for…

  10. DIMENSIONS OF TEACHER'S ATTITUDES TOWARD INSTRUCTIONAL MEDIA.

    ERIC Educational Resources Information Center

    TOBIAS, SIGMUND

    TEACHERS' RATINGS ON SIX 7-POINT SEMANTIC DIFFERENTIAL SCALES (GOOD-BAD, WORTHLESS-VALUABLE, FAIR-UNFAIR, MEANINGLESS-MEANINGFUL, WISE-FOOLISH, DISREPUTABLE-REPUTABLE) WERE OBTAINED FOR THE FOLLOWING TERMS--AUTOMATED INSTRUCTION, SELF-INSTRUCTIONAL PROGRAM, TEACHING MACHINE, MECHANIZED TUTOR, PROGRAMED TEST, PROGRAMED INSTRUCTION, TUTOR TEXT, WORK…

  11. Galaxy Packs Big Star-Making Punch

    NASA Image and Video Library

    2013-04-23

    The tiny red spot in this image is one of the most efficient star-making galaxies ever observed, converting gas into stars at the maximum possible rate. The galaxy shown here is from NASA's WISE, which first spotted the rare galaxy in infrared light.

  12. Corticolimbic hyper-response to emotion and glutamatergic function in people with high schizotypy: a multimodal fMRI-MRS study

    PubMed Central

    Modinos, G; McLaughlin, A; Egerton, A; McMullen, K; Kumari, V; Barker, G J; Keysers, C; Williams, S C R

    2017-01-01

    Animal models and human neuroimaging studies suggest that altered levels of glutamatergic metabolites within a corticolimbic circuit have a major role in the pathophysiology of schizophrenia. Rodent models propose that prefrontal glutamate dysfunction could lead to amygdala hyper-response to environmental stress and underlie hippocampal overdrive in schizophrenia. Here we determine whether changes in brain glutamate are present in individuals with high schizotypy (HS), which refers to the presence of schizophrenia-like characteristics in healthy individuals, and whether glutamate levels are related to altered corticolimbic response to emotion. Twenty-one healthy HS subjects and 22 healthy subjects with low schizotypy (LS) were selected based on their Oxford and Liverpool Inventory of Feelings and Experiences rating. Glutamate levels were measured in the anterior cingulate cortex (ACC) using proton magnetic resonance spectroscopy, followed by a functional magnetic resonance imaging (fMRI) scan to measure corticolimbic response during emotional processing. fMRI results and fMRI × glutamate interactions were considered significant after voxel-wise P<0.05 family-wise error correction. While viewing emotional pictures, HS individuals showed greater activation than did subjects with LS in the caudate, and marginally in the ACC, hippocampus, medial prefrontal cortex (MPFC) and putamen. Although no between-group differences were found in glutamate concentrations, within the HS group ACC glutamate was negatively correlated with striatal activation (left: z=4.30, P=0.004 and right: z=4.12 P=0.008 caudate; left putamen: z=3.89, P=0.018) and marginally with MPFC (z=3.55, P=0.052) and amygdala (left: z=2.88, P=0.062; right: z=2.79, P=0.079), correlations that were not present in LS subjects. These findings provide, to our knowledge, the first evidence that brain glutamate levels are associated with hyper-responsivity in brain regions thought to be critical in the pathophysiology of psychosis. PMID:28375210

  13. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K

    2018-02-01

    In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogenously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent, and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
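
    The core operation the method builds on, removing the leading principal components of the first-level residuals as a proxy for unmodeled structured signal, can be sketched as follows; the number of components removed and the use of scikit-learn's PCA are illustrative assumptions, not the paper's exact decomposition.

        import numpy as np
        from sklearn.decomposition import PCA

        def remove_leading_components(residuals, n_components=5):
            """Subtract the reconstruction from the leading principal components of
            first-level GLM residuals (array of shape time x voxels), leaving a
            residual series closer to uniform, Gaussian spatial noise."""
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(residuals)          # component time courses
            structured = pca.inverse_transform(scores)     # reconstructed structured part
            return residuals - structured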

  14. Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.

    PubMed

    Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica

    2018-05-01

    We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Time-wise change in neck pain in response to rehabilitation with specific resistance training: implications for exercise prescription.

    PubMed

    Zebis, Mette K; Andersen, Christoffer H; Sundstrup, Emil; Pedersen, Mogens T; Sjøgaard, Gisela; Andersen, Lars L

    2014-01-01

    To determine the time-wise effect of specific resistance training on neck pain among industrial technicians with frequent neck pain symptoms. Secondary analysis of a parallel-group cluster randomized controlled trial of 20 weeks performed at two large industrial production units in Copenhagen, Denmark. Women with neck pain >30 mm VAS (N = 131) were included in the present analysis. The training group (N = 77) performed specific resistance training for the neck/shoulder muscles three times a week, and the control group (N = 54) received advice to stay active. Participants of both groups registered neck pain intensity (0-100 mm VAS) once a week. Neck pain intensity was 55 mm (SD 23) at baseline. There was a significant group by time interaction for neck pain (F-value 2.61, P<0.001, DF = 19). Between-group differences in neck pain reached significance after 4 weeks (11 mm, 95% CI 2 to 20). The time-wise change in pain showed three phases; a rapid decrease in the training group compared with the control group during the initial 7 weeks, a slower decrease in pain during the following weeks (week 8-15), and a plateau during the last weeks (week 16-20). Adherence to training followed a two-phase pattern, i.e. weekly participation rate was between 70-86% during the initial 7 weeks, dropping towards 55-63% during the latter half of the training period. Four weeks of specific resistance training reduced neck pain significantly, but 15 weeks is required to achieve maximal pain reduction. The time-wise change in pain followed a three-phase pattern with a rapid effect during the initial 7 weeks followed by a slower but still positive effect, and finally a plateau from week 15 and onwards. Decreased participation rate may explain the decreased efficacy during the latter phase of the intervention.

  16. Dihedral Angles As A Diagnostic Tool For Interpreting The Cooling History Of Mafic Rocks

    NASA Astrophysics Data System (ADS)

    Holness, M. B.

    2016-12-01

    The geometry of three-grain junctions in mafic rocks, particularly those involving two grains of plagioclase, overwhelmingly results from processes occurring during solidification. Sub-solidus textural modification is only significant for fine-grained rocks that have remained hot for a considerable time (e.g. chill zones). The underlying control on the geometry of junctions involving plagioclase is the response of the different plagioclase growth faces to changes in cooling rate. This is demonstrated by the systematic co-variation of plagioclase grain shape and the median value of the pyroxene-plag-plag dihedral angle across (unfractionated) mafic sills. In mafic layered intrusions the median dihedral angle is constant across large stretches of stratigraphy, changing in a step-wise manner as the number of liquidus phases changes in the bulk magma. In the Skaergaard layered intrusion, the shape of cumulus plagioclase grains changes smoothly through the stratigraphy, consistent with continuously decreasing cooling rates in a well-mixed chamber: there is no correlation between overall plagioclase grain shape and dihedral angle. However, three-grain junctions are formed during the last stages of crystallization and therefore record events at the base of the crystal mushy layer. While the overall shape of plagioclase grains is dominated by growth at the magma-mush interface or in the bulk magma, it is the post-accumulation overgrowth that creates the dihedral angle: the shape of this overgrowth changes in a step-wise fashion, matching the step-wise variation in dihedral angle. Dihedral angles in layered intrusions can be used to place constraints on the thickness of the mushy layer, using the stratigraphic offset between the step-wise change in dihedral angle and the first appearance/disappearance of the associated liquidus phase. Dihedral angles also have the potential to constrain intrusion size for fragments of cumulate rocks entrained in volcanic ejecta.

  17. Image Based Mango Fruit Detection, Localisation and Yield Estimation Using Multiple View Geometry

    PubMed Central

    Stein, Madeleine; Bargoti, Suchet; Underwood, James

    2016-01-01

    This paper presents a novel multi-sensor framework to efficiently identify, track, localise and map every piece of fruit in a commercial mango orchard. A multiple viewpoint approach is used to solve the problem of occlusion, thus avoiding the need for labour-intensive field calibration to estimate actual yield. Fruit are detected in images using a state-of-the-art faster R-CNN detector, and pair-wise correspondences are established between images using trajectory data provided by a navigation system. A novel LiDAR component automatically generates image masks for each canopy, allowing each fruit to be associated with the corresponding tree. The tracked fruit are triangulated to locate them in 3D, enabling a number of spatial statistics per tree, row or orchard block. A total of 522 trees and 71,609 mangoes were scanned on a Calypso mango orchard near Bundaberg, Queensland, Australia, with 16 trees counted by hand for validation, both on the tree and after harvest. The results show that single, dual and multi-view methods can all provide precise yield estimates, but only the proposed multi-view approach can do so without calibration, with an error rate of only 1.36% for individual trees. PMID:27854271

  18. Hunger, Food Cravings, and Diet Satisfaction are Related to Changes in Body Weight During a 6-Month Behavioral Weight Loss Intervention: The Beef WISE Study.

    PubMed

    Sayer, R Drew; Peters, John C; Pan, Zhaoxing; Wyatt, Holly R; Hill, James O

    2018-05-31

    Previously published findings from the Beef WISE Study (Beef's Role in Weight Improvement, Satisfaction, and Energy) indicated equivalent weight loss between two energy-restricted higher protein (HP) diets: A HP diet with ≥4 weekly servings of lean beef (B; n = 60) and a HP diet restricted in all red meats (NB; n = 60). Long-term adherence to dietary prescriptions is critical for weight management but may be adversely affected by changes in appetite, food cravings, and diet satisfaction that often accompany weight loss. A secondary a priori aim of the Beef WISE Study was to compare subjective ratings of appetite (hunger and fullness), food cravings, and diet satisfaction (compliance, satisfaction, and deprivation) between the diets and determine whether these factors influenced weight loss. Subjective appetite, food cravings, and diet satisfaction ratings were collected throughout the intervention, and body weight was measured at the baseline, after the weight loss intervention (week 16), and after an eight-week follow-up period (week 24). Hunger and cravings were reduced during weight loss compared to the baseline, while fullness was not different from the baseline. The reduction in cravings was greater for B vs. NB at week 16 only. Higher deprivation ratings during weight loss were reported in NB vs. B at weeks 16 and 24, but participants in both groups reported high levels of compliance and diet satisfaction with no difference between groups. Independent of group assignment, higher baseline hunger and cravings were associated with less weight loss, and greater diet compliance, diet satisfaction, and lower feelings of deprivation were associated with greater weight loss. Strategies to promote reduced feelings of hunger, cravings, and deprivation may increase adherence to dietary prescriptions and improve behavioral weight loss outcomes.

  19. Intersatellite Calibration of Microwave Radiometers for GPM

    NASA Astrophysics Data System (ADS)

    Wilheit, T. T.

    2010-12-01

    The aim of the GPM mission is to measure precipitation globally with high temporal resolution by using a constellation of satellites logically united by the GPM Core Satellite which will be in a non-sunsynchronous, medium inclination orbit. The usefulness of the combined product depends on the consistency of precipitation retrievals from the various microwave radiometers. The calibration requirements for this consistency are quite daunting requiring a multi-layered approach. The radiometers can vary considerably in their frequencies, view angles, polarizations and spatial resolutions depending on their primary application and other constraints. The planned parametric algorithms will correct for the varying viewing parameters, but they are still vulnerable to calibration errors, both relative and absolute. The GPM Intersatellite Calibration Working Group (aka X-CAL) will adjust the calibration of all the radiometers to a common consensus standard for the GPM Level 1C product to be used in precipitation retrievals. Finally, each Precipitation Algorithm Working Group must have its own strategy for removing the residual errors. If the final adjustments are small, the credibility of the precipitation retrievals will be enhanced. Before intercomparing, the radiometers must be self consistent on a scan-wise and orbit-wise basis. Pre-screening for this consistency constitutes the first step in the intercomparison. The radiometers are then compared pair-wise with the microwave radiometer (GMI) on the GPM Core Satellite. Two distinct approaches are used for sake of cross-checking the results. On the one hand, nearly simultaneous observations are collected at the cross-over points of the orbits and the observations of one are converted to virtual observations of the other using a radiative transfer model to permit comparisons. The complementary approach collects histograms of brightness temperature from each instrument. In each case a model is needed to translate the observations from one set of viewing parameters to those of the GMI. For the conically scanning window channel radiometers, the models are reasonably complete. Currently we have compared TMI with Windsat and arrived at a preliminary consensus calibration based on the pair. This consensus calibration standard has been applied to TMI and is currently being compared with AMSR-E on the Aqua satellite. In this way we are implementing a rolling wave spin-up of X-CAL. In this sense, the launch of GPM core will simply provide one more radiometer to the constellation; one hopes it will be the best calibrated. Water vapor and temperature sounders will use a different scenario. Some of the precipitation retrieval algorithms will use sounding channels. The GMI will include typical water vapor sounding channels. The radiances are ingested directly via 3DVAR and 4DVAR techniques into forecast models by many operational weather forecast agencies. The residuals and calibration adjustments of this process will provide a measure of the relative calibration errors throughout the constellation. The use of the ARM Southern Great Plains site as a benchmark for calibrating the more opaque channels is also being investigated.

  20. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
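
    The abstract describes replacing a piece-wise linear on-board model with a nonlinear one inside the Kalman filter; the generic predict/update structure of such an extended Kalman filter is sketched below. This is a textbook EKF, not the C-MAPSS40k MBEC implementation, and the function and Jacobian arguments are placeholders to be supplied by an engine model.

        import numpy as np

        class ExtendedKalmanFilter:
            """Generic EKF with nonlinear state-transition f(x, u) and measurement
            h(x), their Jacobians F_jac and H_jac, process noise Q and measurement
            noise R."""
            def __init__(self, f, h, F_jac, H_jac, Q, R, x0, P0):
                self.f, self.h, self.F_jac, self.H_jac = f, h, F_jac, H_jac
                self.Q, self.R, self.x, self.P = Q, R, x0, P0

            def predict(self, u):
                F = self.F_jac(self.x, u)                  # linearize about current state
                self.x = self.f(self.x, u)                 # propagate the nonlinear model
                self.P = F @ self.P @ F.T + self.Q

            def update(self, z):
                H = self.H_jac(self.x)
                innovation = z - self.h(self.x)
                S = H @ self.P @ H.T + self.R
                K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
                self.x = self.x + K @ innovation
                self.P = (np.eye(len(self.x)) - K @ H) @ self.P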

  2. Some error bounds for K-iterated Gaussian recursive filters

    NASA Astrophysics Data System (ADS)

    Cuomo, Salvatore; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia

    2016-10-01

    Recursive filters (RFs) have achieved a central role in several research fields over the last few years. For example, they are used in image processing, in data assimilation and in electrocardiogram denoising. In particular, among RFs, the Gaussian RFs are an efficient computational tool for approximating Gaussian-based convolutions and are suitable for digital image processing and applications of the scale-space theory. As is common knowledge, the Gaussian RFs, applied to signals with support in a finite domain, generate distortions and artifacts, mostly localized at the boundaries. Heuristic and theoretical improvements have been proposed in the literature to deal with this issue (namely boundary conditions). They include the case in which a Gaussian RF is applied more than once, i.e. the so-called K-iterated Gaussian RFs. In this paper, starting from a summary of the comprehensive mathematical background, we consider the case of the K-iterated first-order Gaussian RF and provide a study of its numerical stability and some component-wise theoretical error bounds.
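
    For readers unfamiliar with the construction, a K-iterated first-order Gaussian recursive filter simply applies a forward and backward first-order sweep K times; repeated passes make the overall response increasingly Gaussian-shaped. The sketch below uses the simplest (zero) boundary treatment and takes the smoothing coefficient alpha as given; the mapping from a target Gaussian sigma and K to alpha, and the boundary corrections analysed in the paper, are not reproduced here.

        import numpy as np

        def first_order_pass(x, alpha):
            """One forward+backward sweep of y[n] = (1 - alpha)*x[n] + alpha*y[n-1];
            the backward sweep symmetrizes the impulse response."""
            y = np.empty(len(x))
            y[0] = (1.0 - alpha) * x[0]
            for n in range(1, len(x)):                     # forward sweep
                y[n] = (1.0 - alpha) * x[n] + alpha * y[n - 1]
            z = np.empty(len(x))
            z[-1] = (1.0 - alpha) * y[-1]
            for n in range(len(x) - 2, -1, -1):            # backward sweep
                z[n] = (1.0 - alpha) * y[n] + alpha * z[n + 1]
            return z

        def k_iterated_gaussian_rf(x, alpha, k=3):
            """K-iterated filter: apply the first-order pass k times."""
            for _ in range(k):
                x = first_order_pass(x, alpha)
            return x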

  3. Cool Astronomy: Education and Public Outreach for the WISE mission

    NASA Astrophysics Data System (ADS)

    Mendez, Bryan J.

    2011-01-01

    The Education and Public Outreach (E/PO) program of the Wide-field Infrared Survey Explorer (WISE) aims to educate and engage students, teachers, and the general public in the endeavor of science. We bring a collection of accomplished professionals in formal and informal astronomy education from around the nation to create learning materials and experiences that appeal to broad audiences. Our E/PO program trains teachers in science, technology, engineering, and mathematics (STEM) topics related to WISE; creates standards-based classroom resources and lessons using WISE data and WISE-related STEM topics; develops interactive programming for museums and science centers; and inspires the public with WISE science and images.

  4. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
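
    As an illustration of the estimation step, past error-correction data (here a simulated per-round detection-event rate) can be fed to a Gaussian-process regressor to track and predict the error rate; the kernel choice, the simulated drift, and the use of scikit-learn are assumptions for this sketch, not the protocol's specification.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Simulated history: fraction of detection events per error-correction round.
        rng = np.random.default_rng(3)
        rounds = np.arange(200.0)
        true_rate = 0.010 + 0.004 * np.sin(rounds / 40.0)   # hypothetical slow drift
        observed = rng.binomial(1000, true_rate) / 1000.0   # 1000 checks per round

        # Fit a GP to the history and predict the rate for the next round.
        kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-5)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(rounds.reshape(-1, 1), observed)
        next_rate, next_std = gp.predict(np.array([[200.0]]), return_std=True)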

  5. Effect of endorectal balloon positioning errors on target deformation and dosimetric quality during prostate SBRT

    NASA Astrophysics Data System (ADS)

    Jones, Bernard L.; Gan, Gregory; Kavanagh, Brian; Miften, Moyed

    2013-11-01

    An inflatable endorectal balloon (ERB) is often used during stereotactic body radiation therapy (SBRT) for treatment of prostate cancer in order to reduce both intrafraction motion of the target and risk of rectal toxicity. However, the ERB can exert significant force on the prostate, and this work assessed the impact of ERB position errors on deformation of the prostate and treatment dose metrics. Seventy-one cone-beam computed tomography (CBCT) image datasets of nine patients with clinical stage T1cN0M0 prostate cancer were studied. An ERB (Flexi-Cuff, EZ-EM, Westbury, NY) inflated with 60 cm3 of air was used during simulation and treatment, and daily kilovoltage (kV) CBCT imaging was performed to localize the prostate. The shape of the ERB in each CBCT was analyzed to determine errors in position, size, and shape. A deformable registration algorithm was used to track the dose received by (and deformation of) the prostate, and dosimetric values such as D95, PTV coverage, and Dice coefficient for the prostate were calculated. The average balloon position error was 0.5 cm in the inferior direction, with errors ranging from 2 cm inferiorly to 1 cm superiorly. The prostate was deformed primarily in the AP direction, and tilted primarily in the anterior-posterior/superior-inferior plane. A significant correlation was seen between errors in depth of ERB insertion (DOI) and mean voxel-wise deformation, prostate tilt, Dice coefficient, and planning-to-treatment prostate inter-surface distance (p < 0.001). Dosimetrically, DOI is negatively correlated with prostate D95 and PTV coverage (p < 0.001). For the model of ERB studied, error in ERB position can cause deformations in the prostate that negatively affect treatment, and this additional aspect of setup error should be considered when ERBs are used for prostate SBRT. Before treatment, the ERB position should be verified, and the ERB should be adjusted if the error is observed to exceed tolerable values.

  6. A catalogue of clusters of galaxies identified from all sky surveys of 2MASS, WISE, and SuperCOSMOS

    NASA Astrophysics Data System (ADS)

    Wen, Z. L.; Han, J. L.; Yang, F.

    2018-03-01

    We identify 47 600 clusters of galaxies from photometric data of the Two Micron All Sky Survey (2MASS), the Wide-field Infrared Survey Explorer (WISE), and SuperCOSMOS, among which 26 125 clusters are recognized for the first time, mostly in the sky outside the Sloan Digital Sky Survey (SDSS) area. About 90 per cent of massive clusters with M500 > 3 × 10^14 M⊙ in the redshift range 0.025 < z < 0.3 have been detected from such survey data, and the detection rate drops to 50 per cent for clusters with a mass of M500 ~ 1 × 10^14 M⊙. Monte Carlo simulations show that the false detection rate for the whole cluster sample is less than 5 per cent. By cross-matching with ROSAT and XMM-Newton sources, we obtain 779 new X-ray cluster candidates which have X-ray counterparts within a projected offset of 0.2 Mpc.

  7. Public awareness and misunderstanding about DrinkWise Australia: a cross-sectional survey of Australian adults.

    PubMed

    Brennan, Emily; Wakefield, Melanie A; Durkin, Sarah J; Jernigan, David H; Dixon, Helen G; Pettigrew, Simone

    2017-08-01

    DrinkWise Australia is an alcohol industry Social Aspects/Public Relations Organisation (SAPRO). We assessed the Australian public's awareness of DrinkWise, beliefs about its funding source, and associations between funding beliefs and perceptions of DrinkWise. A total of 467 adult weekly drinkers completed an online cross-sectional survey in February 2016. Half the sample had heard of DrinkWise (48.6%); of these, the proportion aware that DrinkWise is industry funded (37.0%) was much smaller than the proportion believing it receives government funding (84.1%). Respondents who incorrectly believed DrinkWise receives government funding were more likely to hold a favourable perception of the organisation's credibility, trustworthiness and respectability than those who did not believe it receives government funding (75.9% vs. 58.3%; p=0.032). The drinking population is vulnerable to believing that alcohol industry public relations organisations such as DrinkWise are government funded, which in turn is associated with more favourable perceptions of the organisation's credibility, trustworthiness, and respectability. Implications for public health: Favourable perceptions of DrinkWise may enhance the industry's ability to delay or dilute potentially effective alcohol control policies. Future research should investigate whether educating the public about DrinkWise's alcohol industry funding alters the public's perception of how credible, trustworthy and respectable the organisation is. © 2017 The Authors.

  8. Expression of Wise in chick embryos.

    PubMed

    Shigetani, Y; Itasaki, N

    2007-08-01

    We have performed in situ hybridization to study the expression of Wise in early chick embryos. Wise expression is first detectable in the ectoderm at posterior levels of late neurula. As development proceeds, Wise expression is seen in specific patterns in the ectoderm of the trunk region, pharyngeal arches, limb buds, and feather buds. In addition to these areas, particular cartilages such as the ones in the maxillary process and limbs start to express Wise at the late pharyngula stage, and the expression in these cartilages becomes stronger than that in epidermal components at later stages. Importantly, Wise is expressed in regions where other signaling molecules such as Wnt, Bmp, and Shh are known to function in morphogenesis and differentiation. Direct comparisons of the expression of Wise and these genes are also demonstrated. (c) 2007 Wiley-Liss, Inc.

  9. WISE PHOTOMETRY FOR 400 MILLION SDSS SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Dustin; Hogg, David W.; Schlegel, David J., E-mail: dstndstn@gmail.com

    2016-02-15

    We present photometry of images from the Wide-Field Infrared Survey Explorer (WISE) of over 400 million sources detected by the Sloan Digital Sky Survey (SDSS). We use a “forced photometry” technique, using measured SDSS source positions, star–galaxy classification, and galaxy profiles to define the sources whose fluxes are to be measured in the WISE images. We perform photometry with The Tractor image modeling code, working on our “unWISE” coadds and taking account of the WISE point-spread function and a noise model. The result is a measurement of the flux of each SDSS source in each WISE band. Many sources have little flux in the WISE bands, so often the measurements we report are consistent with zero given our uncertainties. However, for many sources we get 3σ or 4σ measurements; these sources would not be reported by the “official” WISE pipeline and will not appear in the WISE catalog, yet they can be highly informative for some scientific questions. In addition, these small-signal measurements can be used in stacking analyses at the catalog level. The forced photometry approach has the advantage that we measure a consistent set of sources between SDSS and WISE, taking advantage of the resolution and depth of the SDSS images to interpret the WISE images; objects that are resolved in SDSS but blended together in WISE still have accurate measurements in our photometry. Our results, and the code used to produce them, are publicly available at http://unwise.me.
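
    The key property exploited by forced photometry is that, with source positions and profiles held fixed, the image model is linear in the unknown fluxes. The toy sketch below illustrates this with a circular Gaussian PSF and a weighted least-squares solve; it is not The Tractor, and all shapes, units and names are illustrative assumptions.

```python
import numpy as np

def forced_photometry(image, ivar, positions, psf_sigma=2.0):
    """Toy forced photometry: with source positions held fixed (e.g. taken
    from a deeper catalogue), the image model is linear in the per-source
    fluxes, so they follow from a single weighted least-squares solve."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Design matrix: one unit-flux Gaussian PSF profile per source.
    A = np.stack([
        np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * psf_sigma ** 2))
        / (2.0 * np.pi * psf_sigma ** 2)
        for x0, y0 in positions
    ], axis=-1).reshape(-1, len(positions))
    w = ivar.ravel()
    b = image.ravel()
    # Weighted normal equations: fluxes and their covariance.
    AtWA = A.T @ (A * w[:, None])
    AtWb = A.T @ (w * b)
    cov = np.linalg.inv(AtWA)
    fluxes = cov @ AtWb
    return fluxes, np.sqrt(np.diag(cov))     # flux and 1-sigma uncertainty

# Example with a blank image: fluxes consistent with zero, reported with errors.
img = np.zeros((64, 64)); ivar = np.ones_like(img)
print(forced_photometry(img, ivar, positions=[(20.0, 30.0), (40.0, 41.5)]))
```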

  10. Inhibition of WISE preserves renal allograft function.

    PubMed

    Qian, Xueming; Yuan, Xiaodong; Vonderfecht, Steven; Ge, Xupeng; Lee, Jae; Jurisch, Anke; Zhang, Li; You, Andrew; Fitzpatrick, Vincent D; Williams, Alexia; Valente, Eliane G; Pretorius, Jim; Stevens, Jennitte L; Tipton, Barbara; Winters, Aaron G; Graham, Kevin; Harriss, Lindsey; Baker, Daniel M; Damore, Michael; Salimi-Moosavi, Hossein; Gao, Yongming; Elkhal, Abdallah; Paszty, Chris; Simonet, W Scott; Richards, William G; Tullius, Stefan G

    2013-01-01

    Wnt-modulator in surface ectoderm (WISE) is a secreted modulator of Wnt signaling expressed in the adult kidney. Activation of Wnt signaling has been observed in renal transplants developing interstitial fibrosis and tubular atrophy; however, whether WISE contributes to chronic changes is not well understood. Here, we found moderate to high expression of WISE mRNA in a rat model of renal transplantation and in kidneys from normal rats. Treatment with a neutralizing antibody against WISE improved proteinuria and graft function, which correlated with higher levels of β-catenin protein in kidney allografts. In addition, treatment with the anti-WISE antibody reduced infiltration of CD68(+) macrophages and CD8(+) T cells, attenuated glomerular and interstitial injury, and decreased biomarkers of renal injury. This treatment reduced expression of genes involved in immune responses and in fibrogenic pathways. In summary, WISE contributes to renal dysfunction by promoting tubular atrophy and interstitial fibrosis.

  11. DriveWise: an interdisciplinary hospital-based driving assessment program.

    PubMed

    O'Connor, Margaret G; Kapust, Lissa R; Hollis, Ann M

    2008-01-01

    Health care professionals working with the elderly have opportunities through research and clinical practice to shape public policy affecting the older driver. This article describes DriveWise, an interdisciplinary hospital-based driving assessment program developed in response to clinical concerns about the driving safety of individuals with medical conditions. DriveWise clinicians use evidence-based, functional assessments to determine driving competence. In addition, the program was designed to meet the emotional needs of individuals whose driving safety has been called into question. To date, approximately 380 participants have been assessed through DriveWise. The following report details the DriveWise mission, DriveWise team members, and road test results. We continue to refine the assessment process to promote safety and support the dignity and independence of all participants. The DriveWise interdisciplinary approach to practice is a concrete example of how gerontological education across professions can have direct benefits to the older adult.

  12. Measuring the Test-Wiseness of Medical Students.

    ERIC Educational Resources Information Center

    Harvill, Leo M.

    The objectives for this study were to: (1) develop a valid, reliable measure of test-wiseness with equivalent forms for use with students in the health sciences; and (2) determine the level of test-wiseness of entering medical students. The test-wiseness areas included in this study were: similar options, umbrella term, item give-away, convergence…

  13. Best (but oft-forgotten) practices: the multiple problems of multiplicity-whether and how to correct for many statistical tests.

    PubMed

    Streiner, David L

    2015-10-01

    Testing many null hypotheses in a single study results in an increased probability of detecting a significant finding just by chance (the problem of multiplicity). Debates have raged over many years with regard to whether to correct for multiplicity and, if so, how it should be done. This article first discusses how multiple tests lead to an inflation of the α level, then explores the following different contexts in which multiplicity arises: testing for baseline differences in various types of studies, having >1 outcome variable, conducting statistical tests that produce >1 P value, taking multiple "peeks" at the data, and unplanned, post hoc analyses (i.e., "data dredging," "fishing expeditions," or "P-hacking"). It then discusses some of the methods that have been proposed for correcting for multiplicity, including single-step procedures (e.g., Bonferroni); multistep procedures, such as those of Holm, Hochberg, and Šidák; false discovery rate control; and resampling approaches. Note that these various approaches describe different aspects and are not necessarily mutually exclusive. For example, resampling methods could be used to control the false discovery rate or the family-wise error rate (as defined later in this article). However, the use of one of these approaches presupposes that we should correct for multiplicity, which is not universally accepted, and the article presents the arguments for and against such "correction." The final section brings together these threads and presents suggestions with regard to when it makes sense to apply the corrections and how to do so. © 2015 American Society for Nutrition.
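
    As a concrete illustration of three of the approaches named above, the short sketch below computes Bonferroni-adjusted, Holm-adjusted and Benjamini-Hochberg-adjusted (one common false discovery rate procedure) p-values for a small set of p-values; the numbers are invented for illustration only.

```python
import numpy as np

def adjust_pvalues(p, method="holm"):
    """Minimal implementations of Bonferroni, Holm (step-down FWER control)
    and Benjamini-Hochberg (FDR control) adjusted p-values."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    if method == "bonferroni":
        adj = np.minimum(p * m, 1.0)
    elif method == "holm":
        # step-down: multiply the k-th smallest p by (m - k + 1), then enforce monotonicity
        stepped = p[order] * (m - np.arange(m))
        adj[order] = np.minimum(np.maximum.accumulate(stepped), 1.0)
    elif method == "bh":
        # step-up: multiply the k-th smallest p by m/k, then enforce monotonicity from the top
        stepped = p[order] * m / (np.arange(m) + 1)
        adj[order] = np.minimum(np.minimum.accumulate(stepped[::-1])[::-1], 1.0)
    return adj

pvals = [0.001, 0.008, 0.039, 0.041, 0.20]       # hypothetical raw p-values
for method in ("bonferroni", "holm", "bh"):
    print(method, np.round(adjust_pvalues(pvals, method), 4))
```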

  14. The Faintest WISE Debris Disks: Enhanced Methods for Detection and Verification

    NASA Astrophysics Data System (ADS)

    Patel, Rahul I.; Metchev, Stanimir A.; Heinze, Aren; Trollo, Joseph

    2017-02-01

    In an earlier study, we reported nearly 100 previously unknown dusty debris disks around Hipparcos main-sequence stars within 75 pc by selecting stars with excesses in individual WISE colors. Here, we further scrutinize the Hipparcos 75 pc sample to (1) gain sensitivity to previously undetected, fainter mid-IR excesses and (2) remove spurious excesses contaminated by previously unidentified blended sources. We improve on our previous method by adopting a more accurate measure of the confidence threshold for excess detection and by adding an optimally weighted color average that incorporates all shorter-wavelength WISE photometry, rather than using only individual WISE colors. The latter is equivalent to spectral energy distribution fitting, but only over WISE bandpasses. In addition, we leverage the higher-resolution WISE images available through the unWISE.me image service to identify contaminated WISE excesses based on photocenter offsets among the W3- and W4-band images. Altogether, we identify 19 previously unreported candidate debris disks. Combined with the results from our earlier study, we have found a total of 107 new debris disks around 75 pc Hipparcos main-sequence stars using precisely calibrated WISE photometry. This expands the 75 pc debris disk sample by 22% around Hipparcos main-sequence stars and by 20% overall (including non-main-sequence and non-Hipparcos stars).
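
    In simplified form, the "optimally weighted color average" can be thought of as an inverse-variance weighted combination of the individual long-wavelength colour excesses. The sketch below illustrates that idea with invented numbers; it ignores the error correlation introduced by the shared long-wavelength measurement and is not the calibration actually used in the paper.

```python
import numpy as np

def combined_excess_significance(excess, sigma):
    """Inverse-variance weighted average of several single-colour excesses
    (e.g. the W4 excess measured relative to W1, W2 and W3), together with
    the significance of the combined excess."""
    excess = np.asarray(excess, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    mean = np.sum(w * excess) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, mean / err

# Example: three colour excesses (mag) with their uncertainties (invented values).
e, s = combined_excess_significance([0.12, 0.09, 0.15], [0.06, 0.05, 0.08])
print(f"weighted excess = {e:.3f} mag, significance = {s:.1f} sigma")
```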

  15. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by a median filter (MF). Support vector machine (SVM) models using full sample average spectra and pixel-wise spectra, and the wavelengths selected by second-derivative spectra, all achieved classification accuracy over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models built on sample average spectra were used to predict pixel-wise spectra, but achieved lower than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra extended the calibration set and resulted in good prediction results for both pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of spectral preprocessing and of adopting pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
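
    A minimal sketch of the pixel-wise pipeline described above (moving-average smoothing followed by an SVM classifier) is given below, using synthetic spectra in place of real hyperspectral data; the class structure, noise level and SVM settings are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def moving_average(spectra, window=5):
    """Simple moving-average (MA) smoothing of each pixel-wise spectrum."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, spectra)

# Synthetic stand-in: rows are pixel-wise spectra, labels are bean varieties.
rng = np.random.default_rng(1)
n_pixels, n_bands, n_classes = 600, 200, 3
labels = rng.integers(0, n_classes, n_pixels)
base = rng.normal(size=(n_classes, n_bands)).cumsum(axis=1)   # smooth class spectra
spectra = base[labels] + rng.normal(scale=2.0, size=(n_pixels, n_bands))

X = moving_average(spectra)                                   # spectral preprocessing
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:400], labels[:400])
print("pixel-wise classification accuracy:", clf.score(X[400:], labels[400:]))
```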

  16. More Than a Pretty Picture: Making WISE Data Accessible to the Public

    NASA Astrophysics Data System (ADS)

    Ali, Nancy; Mendez, B.; Fricke, K.; Wright, E. L.; Eisenhardt, P. R.; Cutri, R. M.; Hurt, R.; WISE Team

    2011-01-01

    NASA's Wide-field Infrared Survey Explorer (WISE) has surveyed the sky in four bands of infrared light, creating a treasure trove of data. This data is of interest not only to the professional astronomical community, but also to educators, students and the general public. The Education and Public Outreach (E/PO) program for WISE is creating opportunities to make WISE data accessible to these audiences through the Internet as well as through teacher professional development programs. Since WISE took its first-light image in January 2010, images have been featured weekly on the WISE website. These images serve to engage the general public through “pretty pictures” that are accompanied by educational captions. Social media such as Facebook and Twitter are used to further engage the public with the images. For a more comprehensive view of WISE images, we are creating a guided tour of the infrared sky on the WorldWide Telescope. The public will be able to use the free WorldWide Telescope software to interact with WISE images and listen to narration that describes features of the Universe as seen in infrared light. We are also developing resources for teachers and students to access WISE data when it becomes public in 2011, to learn about astronomical imaging and to conduct authentic scientific investigations.

  17. Three new cool brown dwarfs discovered with the wide-field infrared survey explorer (WISE) and an improved spectrum of the Y0 dwarf wise J041022.71+150248.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cushing, Michael C.; Kirkpatrick, J. Davy; Gelino, Christopher R.

    2014-05-01

    As part of a larger search of Wide-field Infrared Survey Explorer (WISE) data for cool brown dwarfs with effective temperatures less than 1000 K, we present the discovery of three new cool brown dwarfs with spectral types later than T7. Using low-resolution, near-infrared spectra obtained with the NASA Infrared Telescope Facility and the Hubble Space Telescope, we derive spectral types of T9.5 for WISE J094305.98+360723.5, T8 for WISE J200050.19+362950.1, and Y0: for WISE J220905.73+271143.9. The identification of WISE J220905.73+271143.9 as a Y dwarf brings the total number of spectroscopically confirmed Y dwarfs to 17. In addition, we present an improved (i.e., higher signal-to-noise ratio) spectrum of the Y0 dwarf WISE J041022.71+150248.4 that confirms the Cushing et al. classification of Y0. Spectrophotometric distance estimates place all three new brown dwarfs at distances less than 12 pc, with WISE J200050.19+362950.1 lying at a distance of only 3.9-8.0 pc. Finally, we note that brown dwarfs like WISE J200050.19+362950.1 that lie in or near the Galactic plane offer an exciting opportunity to directly measure the mass of a brown dwarf via astrometric microlensing.

  18. MEDWISE: an innovative public health information system infrastructure.

    PubMed

    Sahin, Yasar Guneri; Celikkan, Ufuk

    2012-06-01

    In this paper, we present MedWise, a high-level design of a medical information infrastructure, and its architecture. The proposed system offers a comprehensive, modular, robust and extensible infrastructure to be used in public health care systems. The system gathers reliable and evidence-based health data, which it then classifies, interprets and stores in a particular database. It creates a healthcare ecosystem that aids the medical community by providing for less error-prone diagnoses and treatment of diseases. The system will be standards-compliant and therefore complementary to existing healthcare and clinical information systems. The key objective of the proposed system is to provide as much medical historical and miscellaneous data as possible about patients with minimal consultation, thus allowing physicians to easily access Patients' Ancillary Data (PAD), such as hereditary, residential, travel, custom, meteorological, biographical and demographical data, before the consultation. In addition, the system can help to diminish problems and misdiagnoses caused by language barriers, language disorders and misinformation. MedWise can assist physicians in shortening the time needed for diagnosis and consultations, thereby dramatically improving the quality and quantity of physical examinations of patients. Furthermore, since it supplies a significant amount of data, it may be used to improve the skills of students in medical education.

  19. Drop-wise and film-wise water condensation processes occurring on metallic micro-scaled surfaces

    NASA Astrophysics Data System (ADS)

    Starostin, Anton; Valtsifer, Viktor; Barkay, Zahava; Legchenkova, Irina; Danchuk, Viktor; Bormashenko, Edward

    2018-06-01

    Water condensation was studied on silanized (superhydrophobic) and fluorinated (superoleophobic) micro-rough aluminum surfaces of the same topography. Condensation on superhydrophobic surfaces occurred via film-wise mechanism, whereas on superoleophobic surfaces it was drop-wise. The difference in the pathways of condensation was attributed to the various energy barriers separating the Cassie and Wenzel wetting states on the investigated surfaces. The higher barriers inherent for superoleophobic surfaces promoted the drop-wise condensation. Triple-stage kinetics of growth of droplets condensed on superoleophobic surfaces is reported and discussed.

  20. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    PubMed

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.

  1. An adaptive two-stage dose-response design method for establishing proof of concept.

    PubMed

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
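
    The abstract combines the two stage-wise tests through a conditional error function; a closely related and widely used alternative is the weighted inverse-normal combination of stage-wise p-values, sketched below purely as an illustration. It is not necessarily the authors' exact rule, and the example p-values are invented.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine two stage-wise one-sided p-values into a single 'global'
    p-value using pre-specified weights (w1 + w2 = 1)."""
    z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
    return norm.sf(z)

# Example: neither stage is significant on its own at the 0.025 level,
# but the combined evidence establishes proof of concept.
p_stage1, p_stage2 = 0.06, 0.03
print(f"combined p-value = {inverse_normal_combination(p_stage1, p_stage2):.4f}")
```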

  2. Improving the Clinical Skills Performance of Graduating Medical Students Using “WISE OnCall,” a Multimedia Educational Module

    PubMed Central

    Szyld, Demian; Uquillas, Kristen; Green, Brad R.; Yavner, Steven D.; Song, Hyuksoon; Nick, Michael W.; Ng, Grace M.; Pusic, Martin V.; Riles, Thomas S.; Kalet, Adina

    2017-01-01

    Introduction: “Transitions to residency” programs are designed to maximize the quality and safety of patient care as medical students become residents. However, best instructional or readiness assessment practices are not yet established. We sought to study the impact of a screen-based interactive curriculum designed to prepare interns to address common clinical coverage issues (WISE OnCall) on clinical skills demonstrated in simulation, and hypothesized that performance would improve after completing the module. Methods: Senior medical students were recruited to participate in this single-group pre/post study. Students responded to a call from a standardized nurse (SN) and assessed a standardized patient (SP) with low urine output, interacted with a 45-minute WISE OnCall module on the assessment and management of oliguria, and then evaluated a different SP with low urine output of a different underlying cause. Standardized patients assessed clinical skills with a 37-item, behaviorally anchored checklist (intraclass correlation coefficient [ICC], 0.55–0.81). Standardized nurses rated care quality and safety and collaboration and interprofessional communication using a 33-item literature-based, anchored checklist (ICC, 0.47–0.52). Standardized patient and SN ratings of the same student performance were correlated (r, 0.37–0.62; P < 0.01). Physicians assessed clinical reasoning quality based on the students’ patient encounter notes (ICC, 0.55–0.68); these ratings did not correlate with SP and SN ratings. We compared pre-post clinical skills performance and clinical reasoning. Fifty-two medical students (31%) completed this institutional review board-approved study. Results: Performance as measured by the SPs, SNs, and the post-encounter note all showed improvement, with mostly moderate to large effect sizes (range of Cohen’s d, 0.30–1.88; P < 0.05), after completion of the online module. Unexpectedly, professionalism as rated by the SP was poorer after the module (Cohen’s d, −0.93; P = 0.000). Discussion: A brief computer-based educational intervention significantly improved graduating medical students' clinical skills needed for residency readiness. PMID:29076970

  3. Lessons learned in WISE image quality

    NASA Astrophysics Data System (ADS)

    Kendall, Martha; Duval, Valerie G.; Larsen, Mark F.; Heinrichsen, Ingolf H.; Esplin, Roy W.; Shannon, Mark; Wright, Edward L.

    2010-08-01

    The Wide-Field Infrared Survey Explorer (WISE) mission launched in December of 2009 is a true success story. The mission is performing beyond expectations on-orbit and maintained cost and schedule throughout. How does such a thing happen? A team constantly focused on mission success is a key factor. Mission success is more than a program meeting its ultimate science goals; it is also meeting schedule and cost goals to avoid cancellation. The WISE program can attribute some of its success in achieving the image quality needed to meet science goals to lessons learned along the way. A requirement was missed in early decomposition, the absence of which would have adversely affected end-to-end system image quality. Fortunately, the ability of the cross-organizational team to focus on fixing the problem without pointing fingers or waiting for paperwork was crucial in achieving a timely solution. Asking layman questions early in the program could have revealed requirement flowdown misunderstandings between spacecraft control stability and image processing needs. Such is the lesson learned with the WISE spacecraft Attitude Determination & Control Subsystem (ADCS) jitter control and the image data reduction needs. Spacecraft motion can affect image quality in numerous ways. Something as seemingly benign as different terminology being used by teammates in separate groups working on data reduction, spacecraft ADCS, the instrument, mission operations, and the science proved to be a risk to system image quality. While the spacecraft was meeting the allocated jitter requirement, the drift rate variation need was not being met. This missing need was noticed about a year before launch and, with a dedicated team effort, an adjustment was made to the spacecraft ADCS control. WISE is meeting all image quality requirements on-orbit thanks to a diligent team noticing something was missing before it was too late and applying their best effort to find a solution.

  4. Increasing Success Rates in Developmental Math: The Complementary Role of Individual and Institutional Characteristics

    ERIC Educational Resources Information Center

    Fong, Kristen E.; Melguizo, Tatiana; Prather, George

    2015-01-01

    This study tracks students' progression through developmental math sequences and defines progression as both attempting and passing each level of the sequence. A model of successful progression in developmental education was built utilizing individual-, institutional-, and developmental math-level factors. Employing step-wise logistic regression…

  5. High Schools at the Tipping Point

    ERIC Educational Resources Information Center

    Wise, Bob

    2008-01-01

    The U. S. high school system is in crisis, Wise argues. More than 1 million students drop out every year, significant numbers of college freshmen require remedial courses to handle college work, and employers consistently express disappointment in the skills of graduates. The high dropout rate and students' lack of preparedness carries…

  6. Wise promotes coalescence of cells of neural crest and placode origins in the trigeminal region during head development.

    PubMed

    Shigetani, Yasuyo; Howard, Sara; Guidato, Sonia; Furushima, Kenryo; Abe, Takaya; Itasaki, Nobue

    2008-07-15

    While most cranial ganglia contain neurons of either neural crest or placodal origin, neurons of the trigeminal ganglion derive from both populations. The Wnt signaling pathway is known to be required for the development of neural crest cells and for trigeminal ganglion formation; however, migrating neural crest cells do not express any known Wnt ligands. Here we demonstrate that Wise, a Wnt modulator expressed in the surface ectoderm overlying the trigeminal ganglion, plays a role in promoting the assembly of placodal and neural crest cells. When overexpressed in chick, Wise causes delamination of ectodermal cells and attracts migrating neural crest cells. Overexpression of Wise is thus sufficient to ectopically induce ganglion-like structures consisting of cells of both origins. The function of Wise is likely synergized with Wnt6, which is expressed in an overlapping manner with Wise in the surface ectoderm. Electroporation of morpholino antisense oligonucleotides against Wise and Wnt6 causes a decrease in the contact of neural crest cells with the delaminated placode-derived cells. In addition, targeted deletion of Wise in mouse causes phenotypes that can be explained by a decrease in the contribution of neural crest cells to the ophthalmic lobe of the trigeminal ganglion. These data suggest that Wise is able to function cell non-autonomously on neural crest cells and promote trigeminal ganglion formation.

  7. A Comparison of Online, Video Synchronous, and Traditional Learning Modes for an Introductory Undergraduate Physics Course

    NASA Astrophysics Data System (ADS)

    Faulconer, E. K.; Griffith, J.; Wood, B.; Acharyya, S.; Roberts, D.

    2018-05-01

    While the equivalence between online and traditional classrooms has been well researched, very little of this work includes college-level introductory physics. Only one study explored physics at the whole-class level rather than through specific course components such as a single lab or a homework platform. In this work, we compared the failure rate, grade distribution, and withdrawal rate in an introductory undergraduate physics course across several learning modes, including traditional face-to-face instruction, synchronous video instruction, and online classes. Statistically significant differences were found for student failure rates, grade distributions, and withdrawal rates, but with small effect sizes. Post-hoc pair-wise tests were run to determine differences between learning modes. Online students had a significantly lower failure rate than students who took the class via the synchronous video classroom. While statistically significant differences were found for grade distributions, the pair-wise comparison yielded no statistically significant differences between learning modes when using the more conservative Bonferroni correction in post-hoc testing. Finally, in this study, student withdrawal rates were lower for students who took the class in person (in-person classroom and synchronous video classroom) than for online students. Students who persist in an online introductory physics class are more likely to achieve an A than in other modes; however, the withdrawal rate is higher in online physics courses. Further research is warranted to better understand the reasons for higher withdrawal rates in online courses. Finding the root cause to help eliminate differences in student performance across learning modes should remain a high priority for education researchers and the education community as a whole.
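
    One way to carry out a Bonferroni-corrected post-hoc comparison of the kind described above is to run pair-wise chi-square tests on 2x2 outcome tables with the per-comparison alpha divided by the number of pairs. The counts below are invented for illustration and are not the study's data.

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2_contingency

# Hypothetical counts: (failures, non-failures) per learning mode.
counts = {
    "online":       (30, 270),
    "video-sync":   (55, 245),
    "face-to-face": (40, 260),
}

pairs = list(combinations(counts, 2))
alpha_bonf = 0.05 / len(pairs)          # Bonferroni-adjusted per-comparison alpha
for a, b in pairs:
    table = np.array([counts[a], counts[b]])
    chi2, p, dof, _ = chi2_contingency(table)
    flag = "significant" if p < alpha_bonf else "n.s."
    print(f"{a} vs {b}: chi2={chi2:.2f}, p={p:.4f} ({flag} at alpha={alpha_bonf:.4f})")
```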

  8. Explanatory Supplement to the WISE All-Sky Release Products

    NASA Technical Reports Server (NTRS)

    2012-01-01

    The Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) surveyed the entire sky at 3.4, 4.6, 12 and 22 microns in 2010, achieving 5-sigma point source sensitivities per band better than 0.08, 0.11, 1 and 6 mJy in unconfused regions on the ecliptic. The WISE All-Sky Data Release, conducted on March 14, 2012, incorporates all data taken during the full cryogenic mission phase, 7 January 2010 to 6 August 2010, that were processed with improved calibrations and reduction algorithms. Release data products include: (1) an Atlas of 18,240 match-filtered, calibrated and coadded image sets; (2) a Source Catalog containing positions and four-band photometry for over 563 million objects; and (3) an Explanatory Supplement. Ancillary products include a Reject Table that contains 284 million detections that were not selected for the Source Catalog because they are low signal-to-noise ratio or spurious detections of image artifacts, an archive of over 1.5 million sets of calibrated WISE Single-exposure images, a database of 9.4 billion source extractions from those single images, and moving object tracklets identified by the NEOWISE program (Mainzer et al. 2011). The WISE All-Sky Data Release products supersede those from the WISE Preliminary Data Release (Cutri et al. 2011). The Explanatory Supplement to the WISE All-Sky Data Release Products is a general guide for users of the WISE data. The Supplement contains an overview of the WISE mission, facilities, and operations, a detailed description of WISE data processing algorithms, a guide to the content and formats of the image and tabular data products, and cautionary notes that describe known limitations of the All-Sky Release products. Instructions for accessing the WISE data products via the services of the NASA/IPAC Infrared Science Archive are provided. The Supplement also provides analyses of the achieved sky coverage, photometric and astrometric characteristics, and completeness and reliability of the All-Sky Release data products. The WISE All-Sky Release Explanatory Supplement is an on-line document that is updated frequently to provide the most current information for users of the WISE data products. The Explanatory Supplement is maintained at: http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/index.html WISE is a joint project of the University of California, Los Angeles and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. NEOWISE is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration.

  9. Teachers and Technology Use in Secondary Science Classrooms: Investigating the Experiences of Middle School Science Teachers Implementing the Web-based Inquiry Science Environment (WISE)

    NASA Astrophysics Data System (ADS)

    Schulz, Rachel Corinne

    This study investigated the intended teacher use of a technology-enhanced learning tool, the Web-based Inquiry Science Environment (WISE), and the first experiences of teachers new to using it and untrained in its use. The purpose of the study was to learn more about the factors embedded in the design of the technology that enabled it or hindered it from being used as intended. The qualitative research design applied grounded theory methods. Using theoretical sampling and a constant comparative analysis, a document review of the WISE website led to a model of intended teacher use. The experiences of four middle school science teachers as they enacted WISE for the first time were investigated through ethnographic field observations, surveys and interviews, using thematic analysis to construct narratives of each teacher's use. These narratives were compared to the model of intended teacher use of WISE. This study found two levels of intended teacher use for WISE. A basic intended use involved having students run the project to completion while the teacher provides feedback and assesses student learning. A more optimal intended use involved supplementing the core curriculum with WISE, enhancing the core scope and sequence of instruction, and aligning assessment with the goals of instruction through WISE. Moreover, WISE projects were optimally intended to be facilitated through student-centered teaching practices and inquiry-based instruction in a collaborative learning environment. It is also optimally intended for these projects to be shared with other colleagues for feedback and iterative development towards improving the Knowledge Integration of students. Of the four teachers who participated in this study, only one demonstrated the use of WISE as intended in the most basic way. This teacher also demonstrated the use of WISE in a number of optimal ways. Teacher confusion with certain tools available within WISE suggests that there may be a way to improve the user experience through these touch points and help teachers learn how to use the technology as they are selecting and setting up a project run. Further research may study whether improving these touch points can improve teachers' use of WISE as intended, both basically and optimally. It may also study whether teacher use of WISE in basic and optimal ways directly impacts student learning results.

  10. WISE Choices? Understanding Occupational Decision-Making in a Climate of Equal Opportunities for Women in Science and Technology.

    ERIC Educational Resources Information Center

    Henwood, Flis

    1996-01-01

    Examines Women into Science and Engineering (WISE) discourse and explains why WISE has had limited success. It argues the WISE discourse limits the space women have to speak of the conflicts and contradictions they experience, and suggests the need for a greater understanding of how subjective experiences of gender and sexuality impinge upon work…

  11. The Windy Island Soliton Experiment (WISE): Shallow Water and Basin Experiment Configuration and Preliminary Observations

    DTIC Science & Technology

    2009-02-19

    Approved for public release; distribution is unlimited. The Windy Islands Soliton Experiment (WISE) was…

  12. TEAM: efficient two-locus epistasis tests in human genome-wide association study.

    PubMed

    Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei

    2010-06-15

    As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable over single-locus study, since many diseases are known to be complex traits. A brute-force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotypes may be heterozygous and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing a minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large-sample studies. It supports any statistical test that is based on contingency tables, and enables control of both the family-wise error rate and the false discovery rate. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order of magnitude speed-up over the brute-force approach.
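
    For orientation, the kind of statistic that TEAM accelerates can be sketched as a chi-square test on the joint-genotype-by-phenotype contingency table of a SNP pair, with a Bonferroni threshold over all pairs for family-wise error control. The brute-force toy below (random data, invented pair count) builds each table directly, which is exactly the per-pair work that TEAM avoids by updating tables incrementally.

```python
import numpy as np
from scipy.stats import chi2_contingency

def two_locus_test(geno_a, geno_b, phenotype):
    """Chi-square test on the 9x2 contingency table of joint genotypes
    (0/1/2 at each SNP, so 9 combinations) versus a binary phenotype."""
    joint = 3 * np.asarray(geno_a) + np.asarray(geno_b)      # 0..8
    table = np.zeros((9, 2))
    for g, y in zip(joint, phenotype):
        table[g, y] += 1
    # Drop empty genotype rows so the chi-square test is well defined.
    table = table[table.sum(axis=1) > 0]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

# Toy data: 1000 individuals, heterozygous genotypes coded 0/1/2.
rng = np.random.default_rng(2)
ga, gb = rng.integers(0, 3, 1000), rng.integers(0, 3, 1000)
pheno = rng.integers(0, 2, 1000)
n_pairs = 500 * 499 // 2                 # e.g. all pairs among 500 SNPs
chi2, p = two_locus_test(ga, gb, pheno)
print(f"chi2={chi2:.2f}, p={p:.3g}, Bonferroni FWER threshold={0.05 / n_pairs:.2e}")
```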

  13. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method

    PubMed Central

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-01-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8–2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75–150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  14. Cluster mislocation in kinematic Sunyaev-Zel'dovich (kSZ) effect extraction

    NASA Astrophysics Data System (ADS)

    Calafut, Victoria Rose; Bean, Rachel; Yu, Byeonghee

    2018-01-01

    We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kSZ pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05

  15. A method to improve the accuracy of pair-wise combinations of anthropometric elements when only limited data are available.

    PubMed

    Albin, Thomas J

    2013-01-01

    Designers and ergonomists occasionally must produce anthropometric models of workstations with only summary percentile data available for the intended users. Until now, the only option available was adding or subtracting percentiles of the anthropometric elements used in the model (e.g. heights and widths), despite the known resultant errors in the estimate of the percentage of users accommodated. This paper introduces a new method, the Median Correlation Method (MCM), that reduces this error. The objectives are to compare the relative accuracy of MCM with that of combining percentiles for anthropometric models comprising all possible pairs of five anthropometric elements, and to describe the mathematical basis of MCM's greater accuracy. MCM is described, and 95th percentile accommodation values are calculated for the sums and differences of all combinations of five anthropometric elements, both by combining percentiles and by using MCM. The resulting estimates are compared with empirical values of the 95th percentiles, and the relative errors are reported. MCM is shown to be significantly more accurate than adding percentiles and is demonstrated to have a mathematical advantage in estimating accommodation relative to adding or subtracting percentiles. MCM should therefore be used in preference to adding or subtracting percentiles when limited data prevent more sophisticated anthropometric models.
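
    The error that motivates MCM is easy to demonstrate: for two correlated, normally distributed elements, adding the individual 95th percentiles overstates the 95th percentile of their sum unless the correlation is exactly one. The sketch below uses invented means, standard deviations and a correlation; it illustrates the problem rather than implementing MCM itself, whose details are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Two anthropometric elements treated as correlated normals (illustrative
# values: means and SDs in mm, correlation r).
mu = np.array([600.0, 450.0])
sd = np.array([30.0, 25.0])
r = 0.4
z95 = norm.ppf(0.95)

# Naive approach: add the two individual 95th percentiles.
naive_p95 = np.sum(mu + z95 * sd)

# Using the correlation: the sum is normal with variance s1^2 + s2^2 + 2*r*s1*s2.
sd_sum = np.sqrt(sd[0] ** 2 + sd[1] ** 2 + 2 * r * sd[0] * sd[1])
exact_p95 = mu.sum() + z95 * sd_sum

# The naive estimate actually accommodates more than 95% of users.
overshoot = norm.cdf((naive_p95 - mu.sum()) / sd_sum)
print(f"naive 95th percentile of the sum : {naive_p95:.1f} mm")
print(f"exact 95th percentile of the sum : {exact_p95:.1f} mm")
print(f"fraction accommodated by the naive value: {overshoot:.3f}")
```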

  16. Emotion blocks the path to learning under stereotype threat

    PubMed Central

    Good, Catherine; Whiteman, Ronald C.; Maniscalco, Brian; Dweck, Carol S.

    2012-01-01

    Gender-based stereotypes undermine females’ performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments. PMID:21252312

  17. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimates, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including errors of the registration algorithm and imaging errors in the acquisition of the ASL and structural images. Therefore, estimation of the mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM pattern across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.

  18. Emotion blocks the path to learning under stereotype threat.

    PubMed

    Mangels, Jennifer A; Good, Catherine; Whiteman, Ronald C; Maniscalco, Brian; Dweck, Carol S

    2012-02-01

    Gender-based stereotypes undermine females' performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments.

  19. Effects of acute aerobic exercise on neural correlates of attention and inhibition in adolescents with bipolar disorder

    PubMed Central

    Metcalfe, A W S; MacIntosh, B J; Scavone, A; Ou, X; Korczak, D; Goldstein, B I

    2016-01-01

    Executive dysfunction is common during and between mood episodes in bipolar disorder (BD), causing social and functional impairment. This study investigated the effect of acute exercise on adolescents with BD and healthy control subjects (HC) to test for positive or negative consequences on neural response during an executive task. Fifty adolescents (mean age 16.54±1.47 years, 56% female, 30 with BD) completed an attention and response inhibition task before and after 20 min of recumbent cycling at ~70% of age-predicted maximum heart rate. 3 T functional magnetic resonance imaging data were analyzed in a whole brain voxel-wise analysis and as regions of interest (ROI), examining Go and NoGo response events. In the whole brain analysis of Go trials, exercise had larger effect in BD vs HC throughout ventral prefrontal cortex, amygdala and hippocampus; the profile of these effects was of greater disengagement after exercise. Pre-exercise ROI analysis confirmed this 'deficit in deactivation' for BDs in rostral ACC and found an activation deficit on NoGo errors in accumbens. Pre-exercise accumbens NoGo error activity correlated with depression symptoms and Go activity with mania symptoms; no correlations were present after exercise. Performance was matched to controls and results survived a series of covariate analyses. This study provides evidence that acute aerobic exercise transiently changes neural response during an executive task among adolescents with BD, and that pre-exercise relationships between symptoms and neural response are absent after exercise. Acute aerobic exercise constitutes a biological probe that may provide insights regarding pathophysiology and treatment of BD. PMID:27187236

  20. Effects of acute aerobic exercise on neural correlates of attention and inhibition in adolescents with bipolar disorder.

    PubMed

    Metcalfe, A W S; MacIntosh, B J; Scavone, A; Ou, X; Korczak, D; Goldstein, B I

    2016-05-17

    Executive dysfunction is common during and between mood episodes in bipolar disorder (BD), causing social and functional impairment. This study investigated the effect of acute exercise on adolescents with BD and healthy control subjects (HC) to test for positive or negative consequences on neural response during an executive task. Fifty adolescents (mean age 16.54±1.47 years, 56% female, 30 with BD) completed an attention and response inhibition task before and after 20 min of recumbent cycling at ~70% of age-predicted maximum heart rate. 3 T functional magnetic resonance imaging data were analyzed in a whole brain voxel-wise analysis and as regions of interest (ROI), examining Go and NoGo response events. In the whole brain analysis of Go trials, exercise had larger effect in BD vs HC throughout ventral prefrontal cortex, amygdala and hippocampus; the profile of these effects was of greater disengagement after exercise. Pre-exercise ROI analysis confirmed this 'deficit in deactivation' for BDs in rostral ACC and found an activation deficit on NoGo errors in accumbens. Pre-exercise accumbens NoGo error activity correlated with depression symptoms and Go activity with mania symptoms; no correlations were present after exercise. Performance was matched to controls and results survived a series of covariate analyses. This study provides evidence that acute aerobic exercise transiently changes neural response during an executive task among adolescents with BD, and that pre-exercise relationships between symptoms and neural response are absent after exercise. Acute aerobic exercise constitutes a biological probe that may provide insights regarding pathophysiology and treatment of BD.

  1. Space-Wise approach for airborne gravity data modelling

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful for understanding and mapping geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, software to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. One of the main differences with respect to the satellite application of this approach is that, while the stochastic characteristics of the observation error can be considered a priori well known when processing GOCE data, in airborne gravimetry these characteristics are unknown, owing to the complex environment in which the observations are acquired, and must be retrieved from the dataset itself. The presented solution is suited to airborne data analysis, allowing gravity observations to be filtered and gridded quickly and easily. Some innovative theoretical aspects, focusing in particular on covariance modelling, are also presented. Finally, the goodness of the procedure is evaluated by means of a test on real data, retrieving the gravitational signal with a predicted accuracy of about 0.4 mGal.
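
    A stripped-down view of the along-track step is a frequency-domain Wiener filter whose gain is built from signal and noise power spectral densities. The sketch below uses a synthetic track and assumed PSD shapes purely for illustration; the collocation/gridding step and the data-driven covariance estimation discussed above are not reproduced.

```python
import numpy as np

def along_track_wiener(observations, signal_psd, noise_psd):
    """Frequency-domain Wiener filter of one along-track profile.
    signal_psd and noise_psd are evaluated at np.fft.rfftfreq frequencies;
    in practice they must be estimated from the data themselves."""
    spectrum = np.fft.rfft(observations)
    gain = signal_psd / (signal_psd + noise_psd)      # Wiener gain in [0, 1]
    return np.fft.irfft(gain * spectrum, n=len(observations))

# Toy track: a smooth gravity signal plus high-frequency observation noise.
rng = np.random.default_rng(3)
n = 1024
t = np.arange(n)
signal = 10.0 * np.sin(2 * np.pi * t / 400.0) + 5.0 * np.sin(2 * np.pi * t / 150.0)
obs = signal + rng.normal(scale=4.0, size=n)

freqs = np.fft.rfftfreq(n)
signal_psd = 1.0 / (1.0 + (freqs / 0.01) ** 4)        # assumed low-pass signal model
noise_psd = np.full_like(freqs, 1e-3)                 # assumed white observation noise
filtered = along_track_wiener(obs, signal_psd, noise_psd)
print("rms error before/after filtering:",
      np.sqrt(np.mean((obs - signal) ** 2)).round(2),
      np.sqrt(np.mean((filtered - signal) ** 2)).round(2))
```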

  2. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    NASA Astrophysics Data System (ADS)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally, locally and un-identifiable model classes, and then to model updating of a two degree-of-freedom nonlinear structure with Duffing nonlinearities in its interstory force-deflection relationship.
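
    The sketch below is a heavily simplified, illustrative Python version of the ingredients the abstract refers to, not the authors' Self-regulated ABC-SubSim: a nested sequence of tolerance levels chosen adaptively as quantiles of the current distances, and component-wise Metropolis moves that repopulate each level subject to the ABC constraint. The proposal-variance regulation and the hierarchical state-space model are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def abc_subsim_sketch(simulate, discrepancy, prior_sample, prior_logpdf,
                          n_samples=500, p0=0.2, n_levels=5, step=0.5):
        """Minimal ABC sampler with Subset-Simulation-style nested tolerances."""
        theta = prior_sample(n_samples)
        dist = np.array([discrepancy(simulate(t)) for t in theta])
        for _ in range(n_levels):
            eps = np.quantile(dist, p0)                 # adaptive tolerance level
            seeds, seed_dist = theta[dist <= eps], dist[dist <= eps]
            per_seed = int(np.ceil(n_samples / len(seeds)))
            new_theta, new_dist = [], []
            for t, d in zip(seeds, seed_dist):
                cur_t, cur_d = t.copy(), d
                for _ in range(per_seed):               # component-wise Metropolis
                    prop = cur_t.copy()
                    j = rng.integers(cur_t.size)
                    prop[j] += step * rng.normal()
                    if np.log(rng.uniform()) < prior_logpdf(prop) - prior_logpdf(cur_t):
                        d_prop = discrepancy(simulate(prop))
                        if d_prop <= eps:               # ABC constraint at this level
                            cur_t, cur_d = prop, d_prop
                    new_theta.append(cur_t.copy())
                    new_dist.append(cur_d)
            theta = np.array(new_theta)[:n_samples]
            dist = np.array(new_dist)[:n_samples]
        return theta, eps

    # toy problem: infer the mean of a Gaussian from its sample mean
    obs = rng.normal(2.0, 1.0, size=50)
    post, eps = abc_subsim_sketch(
        simulate=lambda t: rng.normal(t[0], 1.0, size=50),
        discrepancy=lambda y: abs(y.mean() - obs.mean()),
        prior_sample=lambda n: rng.uniform(-10, 10, size=(n, 1)),
        prior_logpdf=lambda t: 0.0 if -10 <= t[0] <= 10 else -np.inf)
    print("approximate posterior mean:", post.mean(), "final tolerance:", eps)
    ```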

  3. The WISE Satellite Development: Managing the Risks and the Opportunities

    NASA Technical Reports Server (NTRS)

    Duval, Valerie G.; Elwell, John D.; Howard, Joan F.; Irace, William R.; Liu, Feng-Chuan

    2010-01-01

    NASA's Wide-field Infrared Survey Explorer (WISE) MIDEX mission is surveying the entire sky in four infrared bands from 3.4 to 22 micrometers. The WISE instrument consists of a 40 cm telescope, a solid hydrogen cryostat, a scan mirror mechanism, and four 1K x1K infrared detectors. The WISE spacecraft bus provides communication, data handling, and avionics including instrument pointing. A Delta 7920 successfully launched WISE into a Sun-synchronous polar orbit on December 14, 2009. WISE was competitively selected by NASA as a Medium cost Explorer mission (MIDEX) in 2002. MIDEX missions are led by the Principal Investigator who delegates day-to-day management to the Project Manager. Given the tight cost cap and relatively short development schedule, NASA chose to extend the development period one year with an option to cancel the mission if certain criteria were not met. To meet this and other challenges, the WISE management team had to learn to work seamlessly across institutional lines and to recognize risks and opportunities in order to develop the flight hardware within the project resources. In spite of significant technical issues, the WISE satellite was delivered on budget and on schedule. This paper describes our management approach and risk posture, technical issues, and critical decisions made.

  4. Inhibition of Wnt signaling by Wise (Sostdc1) and negative feedback from Shh controls tooth number and patterning.

    PubMed

    Ahn, Youngwook; Sanderson, Brian W; Klein, Ophir D; Krumlauf, Robb

    2010-10-01

    Mice carrying mutations in Wise (Sostdc1) display defects in many aspects of tooth development, including tooth number, size and cusp pattern. To understand the basis of these defects, we have investigated the pathways modulated by Wise in tooth development. We present evidence that, in tooth development, Wise suppresses survival of the diastema or incisor vestigial buds by serving as an inhibitor of Lrp5- and Lrp6-dependent Wnt signaling. Reducing the dosage of the Wnt co-receptor genes Lrp5 and Lrp6 rescues the Wise-null tooth phenotypes. Inactivation of Wise leads to elevated Wnt signaling and, as a consequence, vestigial tooth buds in the normally toothless diastema region display increased proliferation and continuous development to form supernumerary teeth. Conversely, gain-of-function studies show that ectopic Wise reduces Wnt signaling and tooth number. Our analyses demonstrate that the Fgf and Shh pathways are major downstream targets of Wise-regulated Wnt signaling. Furthermore, our experiments revealed that Shh acts as a negative-feedback regulator of Wnt signaling and thus determines the fate of the vestigial buds and later tooth patterning. These data provide insight into the mechanisms that control Wnt signaling in tooth development and into how crosstalk among signaling pathways controls tooth number and morphogenesis.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Rahul I.; Metchev, Stanimir A.; Trollo, Joseph

    In an earlier study, we reported nearly 100 previously unknown dusty debris disks around Hipparcos main-sequence stars within 75 pc by selecting stars with excesses in individual WISE colors. Here, we further scrutinize the Hipparcos 75 pc sample to (1) gain sensitivity to previously undetected, fainter mid-IR excesses and (2) remove spurious excesses contaminated by previously unidentified blended sources. We improve on our previous method by adopting a more accurate measure of the confidence threshold for excess detection and by adding an optimally weighted color average that incorporates all shorter-wavelength WISE photometry, rather than using only individual WISE colors. The latter is equivalent to spectral energy distribution fitting, but only over WISE bandpasses. In addition, we leverage the higher-resolution WISE images available through the unWISE.me image service to identify contaminated WISE excesses based on photocenter offsets among the W3- and W4-band images. Altogether, we identify 19 previously unreported candidate debris disks. Combined with the results from our earlier study, we have found a total of 107 new debris disks around 75 pc Hipparcos main-sequence stars using precisely calibrated WISE photometry. This expands the 75 pc debris disk sample by 22% around Hipparcos main-sequence stars and by 20% overall (including non-main-sequence and non-Hipparcos stars).

  6. Cluster mislocation in kinematic Sunyaev-Zel'dovich effect extraction

    NASA Astrophysics Data System (ADS)

    Calafut, Victoria; Bean, Rachel; Yu, Byeonghee

    2017-12-01

    We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kinematic Sunyaev-Zel'dovich (kSZ) pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05

  7. WISE Observations of Comets, Centaurs, & Scattered Disk Objects

    NASA Technical Reports Server (NTRS)

    Bauer, J.; Walker, R.; Mainzer, A.; Masiero, J.; Grav, T.; Cutri, R.; Dailey, J.; McMillan, R.; Lisse, C. M.; Fernandez, Y. R.; hide

    2011-01-01

    The Wide-field Infrared Survey Explorer (WISE) was launched on December 14, 2009. WISE imaged more than 99% of the sky in the mid-infrared over a 9-month mission lifetime. In addition to its primary goals of detecting the most luminous infrared galaxies and the nearest brown dwarfs, WISE detected over 155,500 solar system bodies, 33,700 of which were previously unknown. Most of the new objects were Main Belt asteroids, and particular emphasis was placed on the discovery of Near-Earth Asteroids. Hundreds of Jupiter Trojans have been imaged by WISE as well. However, a substantial number of Centaurs, Scattered Disc Objects (SDOs), and cometary objects were also observed and discovered.

  8. Educational Materials - Burn Wise

    EPA Pesticide Factsheets

    Burn Wise outreach material. Burn Wise is a partnership program of the U.S. Environmental Protection Agency that emphasizes the importance of burning the right wood, the right way, in the right wood-burning appliance to protect your home, health, and the air we breathe.

  9. Evaluation of the SunWise School Program

    ERIC Educational Resources Information Center

    Geller, Alan; Rutsch, Linda; Kenausis, Kristin; Zhang, Zi

    2003-01-01

    Melanoma, the most fatal form of skin cancer, is rising at rates faster than all other preventable cancers in the United States. Childhood exposure to ultraviolet (UV) light increases the risk for skin cancer as an adult, therefore, starting positive sun protection habits early may be key to reducing the incidence of this disease. The…

  10. Application of Markov Models for Analysis of Development of Psychological Characteristics

    ERIC Educational Resources Information Center

    Kuravsky, Lev S.; Malykh, Sergey B.

    2004-01-01

    A technique to study combined influence of environmental and genetic factors on the base of changes in phenotype distributions is presented. Histograms are exploited as base analyzed characteristics. A continuous time, discrete state Markov process with piece-wise constant interstate transition rates is associated with evolution of each histogram.…

  11. Schools and Districts Use Resources Wisely to Increase Achievement and Graduate More Students.

    ERIC Educational Resources Information Center

    Southern Regional Education Board (SREB), 2011

    2011-01-01

    In a time of reduced funding, schools are meeting the challenge to continue improving classroom practices, student achievement and graduation rates. Many schools and teachers are forming networks to exchange information via the Internet as they tap into free electronic resources. Career/technical (CT) instructors are teaching students about…

  12. The Rankings Game: Who's Playing Whom?

    ERIC Educational Resources Information Center

    Burness, John F.

    2008-01-01

    This summer, Forbes magazine published its new rankings of "America's Best Colleges," implying that it had developed a methodology that would give the public the information that it needed to choose a college wisely. "U.S. News & World Report," which in 1983 published the first annual ranking, just announced its latest ratings last week--including…

  13. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
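
    A back-of-the-envelope version of this kind of analysis can be written in a few lines: under bounded-distance decoding, an inner block code correcting t_in bit errors fails with a binomial tail probability, and each failed inner block becomes an erroneous symbol for the outer code. The code parameters below are arbitrary placeholders, not the schemes evaluated in the paper, and the result is an upper bound rather than the exact error probability.

    ```python
    from math import comb

    def block_error_prob(n, t, p):
        """P(more than t errors among n positions), i.i.d. error probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

    def cascaded_word_error_bound(eps, n_in=63, t_in=2, n_out=255, t_out=16):
        """Upper bound on the word error rate of a two-level cascaded scheme:
        an inner code correcting t_in bit errors per n_in-bit block feeds an
        outer code correcting t_out symbol (i.e. inner-block) errors."""
        p_symbol = block_error_prob(n_in, t_in, eps)     # inner decoding failure
        return block_error_prob(n_out, t_out, p_symbol)  # outer decoding failure

    for eps in (0.02, 0.01, 0.005):
        print(f"channel BER {eps}: word error bound {cascaded_word_error_bound(eps):.2e}")
    ```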

  14. Burn Wise Educational Materials for Businesses

    EPA Pesticide Factsheets

    Burn Wise outreach material. Burn Wise is a partnership program of the U.S. Environmental Protection Agency that emphasizes the importance of burning the right wood, the right way, in the right wood-burning appliance to protect your home, health, and the air we breathe.

  15. Verifying and Postprocessing the Ensemble Spread-Error Relationship

    NASA Astrophysics Data System (ADS)

    Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli

    2013-04-01

    With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
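
    The sketch below does not reproduce the metrics proposed in the paper; it merely illustrates the underlying idea of comparing an observed spread-error correlation with a "perfect ensemble" benchmark obtained by Monte Carlo, in which synthetic errors are drawn with exactly the ensemble's spread. All numbers come from simulated toy data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def spread_error_diagnostics(ens, obs, n_synth=2000):
        """Spread-error correlation plus a Monte Carlo perfect-ensemble benchmark.

        ens : (n_forecasts, n_members) ensemble forecasts
        obs : (n_forecasts,) verifying observations
        """
        spread = ens.std(axis=1, ddof=1)             # per-forecast dispersion
        abs_err = np.abs(ens.mean(axis=1) - obs)     # error of the ensemble mean
        r_actual = np.corrcoef(spread, abs_err)[0, 1]

        # benchmark: if errors really were N(0, spread^2), how large could the
        # spread-|error| correlation be for this particular spread series?
        r_synth = np.empty(n_synth)
        for i in range(n_synth):
            synth_err = np.abs(rng.normal(scale=spread))
            r_synth[i] = np.corrcoef(spread, synth_err)[0, 1]
        return r_actual, r_synth.mean()

    # toy data: 500 forecasts, 30 members, with genuinely variable error scale
    n, m = 500, 30
    scale = rng.uniform(0.5, 2.0, size=n)
    obs = rng.normal(scale=scale)
    ens = rng.normal(scale=scale[:, None], size=(n, m))
    r, r_bench = spread_error_diagnostics(ens, obs)
    print(f"spread-error correlation {r:.2f}; perfect-ensemble benchmark {r_bench:.2f}")
    ```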

  16. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  17. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  18. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  19. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  20. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  1. Improving Visualization and Interpretation of Metabolome-Wide Association Studies: An Application in a Population-Based Cohort Using Untargeted 1H NMR Metabolic Profiling.

    PubMed

    Castagné, Raphaële; Boulangé, Claire Laurence; Karaman, Ibrahim; Campanella, Gianluca; Santos Ferreira, Diana L; Kaluarachchi, Manuja R; Lehne, Benjamin; Moayyeri, Alireza; Lewis, Matthew R; Spagou, Konstantina; Dona, Anthony C; Evangelos, Vangelis; Tracy, Russell; Greenland, Philip; Lindon, John C; Herrington, David; Ebbels, Timothy M D; Elliott, Paul; Tzoulaki, Ioanna; Chadeau-Hyam, Marc

    2017-10-06

    1H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well-established, and the use of univariate metabolome-wide association studies (MWAS) investigating the spectral features separately has emerged as a computationally efficient and interpretable alternative to multivariate models. The MWAS rely on the accurate estimation of a metabolome-wide significance level (MWSL) to be applied to control the family-wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in turn, call for efficient prioritization of spectral variables of interest. Using human serum 1H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across the different model parameterizations and distributional features of the outcome. We propose both efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies.
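
    For orientation, a common way to obtain a metabolome-wide significance level of the kind discussed above is a min-p permutation procedure: permute the outcome, recompute all univariate association p-values, and take a low quantile of the per-permutation minimum p-values as the threshold controlling the family-wise error rate. The sketch below is this generic procedure on simulated data, not the authors' extended MWSL estimator.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def estimate_mwsl(X, y, n_perm=100, alpha=0.05):
        """Permutation (min-p) estimate of a metabolome-wide significance level.

        X : (n_samples, n_features) spectral variables
        y : (n_samples,) outcome, e.g. glucose
        Returns a per-feature p-value threshold controlling the FWER at `alpha`.
        """
        min_p = np.empty(n_perm)
        for b in range(n_perm):
            y_perm = rng.permutation(y)
            pvals = [stats.pearsonr(X[:, j], y_perm)[1] for j in range(X.shape[1])]
            min_p[b] = min(pvals)
        return np.quantile(min_p, alpha)

    # toy data: 100 samples, 300 spectral variables, 3 truly associated features
    n, p = 100, 300
    X = rng.normal(size=(n, p))
    y = X[:, :3].sum(axis=1) + rng.normal(size=n)

    mwsl = estimate_mwsl(X, y)
    observed = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(p)])
    print(f"MWSL ~ {mwsl:.1e}; features passing: {np.where(observed < mwsl)[0]}")
    ```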

  2. Improving Visualization and Interpretation of Metabolome-Wide Association Studies: An Application in a Population-Based Cohort Using Untargeted 1H NMR Metabolic Profiling

    PubMed Central

    2017-01-01

    1H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well-established, and the use of univariate metabolome-wide association studies (MWAS) investigating the spectral features separately has emerged as a computationally efficient and interpretable alternative to multivariate models. The MWAS rely on the accurate estimation of a metabolome-wide significance level (MWSL) to be applied to control the family-wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in turn, call for efficient prioritization of spectral variables of interest. Using human serum 1H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across the different model parameterizations and distributional features of the outcome. We propose both efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies. PMID:28823158

  3. Electroconvulsive therapy reduces frontal cortical connectivity in severe depressive disorder

    PubMed Central

    Perrin, Jennifer S.; Merz, Susanne; Bennett, Daniel M.; Currie, James; Steele, Douglas J.; Reid, Ian C.; Schwarzbauer, Christian

    2012-01-01

    To date, electroconvulsive therapy (ECT) is the most potent treatment in severe depression. Although ECT has been successfully applied in clinical practice for over 70 years, the underlying mechanisms of action remain unclear. We used functional MRI and a unique data-driven analysis approach to examine functional connectivity in the brain before and after ECT treatment. Our results show that ECT has lasting effects on the functional architecture of the brain. A comparison of pre- and posttreatment functional connectivity data in a group of nine patients revealed a significant cluster of voxels in and around the left dorsolateral prefrontal cortical region (Brodmann areas 44, 45, and 46), where the average global functional connectivity was considerably decreased after ECT treatment (P < 0.05, family-wise error-corrected). This decrease in functional connectivity was accompanied by a significant improvement (P < 0.001) in depressive symptoms; the patients’ mean scores on the Montgomery Asberg Depression Rating Scale pre- and posttreatment were 36.4 (SD = 4.9) and 10.7 (SD = 9.6), respectively. The findings reported here add weight to the emerging “hyperconnectivity hypothesis” in depression and support the proposal that increased connectivity may constitute both a biomarker for mood disorder and a potential therapeutic target. PMID:22431642

  4. FGWAS: Functional genome wide association analysis.

    PubMed

    Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu

    2017-10-01

    Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analyzing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytic phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and control measures were applied to the units with high error rates. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory enabled a reduction of the error rates mainly in the pre-analytic and analytic phases.

  6. Tibiofemoral contact forces during walking, running and sidestepping.

    PubMed

    Saxby, David J; Modenese, Luca; Bryant, Adam L; Gerus, Pauline; Killen, Bryce; Fortin, Karine; Wrigley, Tim V; Bennell, Kim L; Cicuttini, Flavia M; Lloyd, David G

    2016-09-01

    We explored the tibiofemoral contact forces and the relative contributions of muscles and external loads to those contact forces during various gait tasks. Second, we assessed the relationships between external gait measures and contact forces. A calibrated electromyography-driven neuromusculoskeletal model estimated the tibiofemoral contact forces during walking (1.44±0.22 m s⁻¹), running (4.38±0.42 m s⁻¹) and sidestepping (3.58±0.50 m s⁻¹) in healthy adults (n=60, 27.3±5.4 years, 1.75±0.11 m, and 69.8±14.0 kg). Contact forces increased from walking (∼1-2.8 BW) to running (∼3-8 BW); sidestepping had the largest maximum total (8.47±1.57 BW) and lateral contact forces (4.3±1.05 BW), while running had the largest maximum medial contact forces (5.1±0.95 BW). Relative muscle contributions increased across gait tasks (up to 80-90% of medial contact forces), and peaked during running for lateral contact forces (∼90%). Knee adduction moment (KAM) had weak relationships with tibiofemoral contact forces (all R² < 0.36) and the relationships were gait task-specific. Step-wise regression of multiple external gait measures strengthened relationships (0.20

  7. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
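
    As a hedged illustration of this type of modeling (not the JPL dataset or the paper's actual model), the sketch below fits a Poisson regression of weekly error counts on workload and novelty scores, with the number of files radiated as the exposure, so that the fitted coefficients describe multiplicative effects on the error rate per file. All variable names and values are hypothetical.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)

    # hypothetical weekly data: files radiated, workload and novelty scores, error counts
    n_weeks = 120
    files = rng.integers(20, 200, size=n_weeks)
    workload = rng.uniform(0, 1, size=n_weeks)
    novelty = rng.uniform(0, 1, size=n_weeks)
    true_rate = 0.002 * np.exp(1.2 * workload + 0.8 * novelty)   # errors per file
    errors = rng.poisson(true_rate * files)

    # Poisson regression with exposure = files, so coefficients act on the rate per file
    X = sm.add_constant(np.column_stack([workload, novelty]))
    fit = sm.GLM(errors, X, family=sm.families.Poisson(), exposure=files).fit()
    print(fit.summary())
    print("multiplicative effect on error rate per file:", np.exp(fit.params[1:]))
    ```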

  8. Discrete conservation properties for shallow water flows using mixed mimetic spectral elements

    NASA Astrophysics Data System (ADS)

    Lee, D.; Palha, A.; Gerritsma, M.

    2018-03-01

    A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and the curl by the divergence. This allows for the exact conservation of first order moments (mass, vorticity), as well as higher moments (energy, potential enstrophy), subject to the truncation error of the time stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds point wise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation is dependent on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces and arbitrarily high order spatial error convergence.

  9. Structure based alignment and clustering of proteins (STRALCP)

    DOEpatents

    Zemla, Adam T.; Zhou, Carol E.; Smith, Jason R.; Lam, Marisa W.

    2013-06-18

    Disclosed are computational methods of clustering a set of protein structures based on local and pair-wise global similarity values. Pair-wise local and global similarity values are generated based on pair-wise structural alignments for each protein in the set of protein structures. Initially, the protein structures are clustered based on pair-wise local similarity values. The protein structures are then clustered based on pair-wise global similarity values. For each given cluster both a representative structure and spans of conserved residues are identified. The representative protein structure is used to assign newly-solved protein structures to a group. The spans are used to characterize conservation and assign a "structural footprint" to the cluster.
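
    The sketch below illustrates the general pattern of clustering on a pair-wise similarity matrix and picking a representative structure per cluster; the similarity values, the linkage choice and the representative rule are invented for illustration and do not reproduce STRALCP's local/global similarity definitions or its two-stage procedure.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # hypothetical pair-wise global structural similarity scores (0..1, symmetric)
    proteins = ["prot_A", "prot_B", "prot_C", "prot_D", "prot_E"]
    sim = np.array([
        [1.00, 0.92, 0.40, 0.35, 0.88],
        [0.92, 1.00, 0.38, 0.30, 0.85],
        [0.40, 0.38, 1.00, 0.90, 0.42],
        [0.35, 0.30, 0.90, 1.00, 0.37],
        [0.88, 0.85, 0.42, 0.37, 1.00],
    ])

    # convert similarities to distances and cluster with average linkage
    dist = 1.0 - sim
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=0.3, criterion="distance")   # cut the tree at distance 0.3

    for name, lab in zip(proteins, labels):
        print(name, "-> cluster", lab)

    # a simple 'representative' per cluster: the member with the highest mean
    # similarity to the other members of its cluster
    for lab in np.unique(labels):
        members = np.where(labels == lab)[0]
        rep = members[np.argmax(sim[np.ix_(members, members)].mean(axis=1))]
        print("cluster", lab, "representative:", proteins[rep])
    ```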

  10. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline, predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance; range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.

  11. Soot Volume Fraction Imaging

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Ku, Jerry C.

    1994-01-01

    A new technique is described for the full-field determination of soot volume fractions via laser extinction measurements. This technique differs from previously reported point-wise methods in that a two-dimensional array (i.e., image) of data is acquired simultaneously. In this fashion, the net data rate is increased, allowing the study of time-dependent phenomena and the investigation of spatial and temporal correlations. A telecentric imaging configuration is employed to provide depth-invariant magnification and to permit the specification of the collection angle for scattered light. To improve the threshold measurement sensitivity, a method is employed to suppress undesirable coherent imaging effects. A discussion of the tomographic inversion process is provided, including the results obtained from numerical simulation. Results obtained with this method from an ethylene diffusion flame are shown to be in close agreement with those previously obtained by sequential point-wise interrogation.

  12. 3-methylcyclohexanone thiosemicarbazone: determination of E/Z isomerization barrier by dynamic high-performance liquid chromatography, configuration assignment and theoretical study of the mechanisms involved by the spontaneous, acid and base catalyzed processes.

    PubMed

    Carradori, Simone; Cirilli, Roberto; Dei Cicchi, Simona; Ferretti, Rosella; Menta, Sergio; Pierini, Marco; Secci, Daniela

    2012-12-21

    Here, we report on the simultaneous direct HPLC diastereo- and enantioseparation of 3-methylcyclohexanone thiosemicarbazone (3-MCET) on a polysaccharide-based chiral stationary phase under normal-phase conditions. The optimized chromatographic system was employed in dynamic HPLC experiments (DHPLC), as well as detection technique in a batch wise approach to determine the rate constants and the corresponding free energy activation barriers of the spontaneous, base- and acid-promoted E/Z diastereomerization of 3-MCET. The stereochemical characterization of four stereoisomers of 3-MCET was fully accomplished by integrating the results obtained by chemical correlation method with those derived by theoretical calculations and experimental investigations of circular dichroism (CD). As a final goal, a deepened analysis of the perturbing effect exercised by the stationary phase on rate constant values measured through DHPLC determinations as a function of the chromatographic separation factor α of the interconverting species was successfully accomplished. This revealed quite small deviations from the equivalent kinetic values obtained by off-column batch wise procedure, and suggested a possible effective correction of rate constants measured by DHPLC approach. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Topological Vulnerability Analysis

    NASA Astrophysics Data System (ADS)

    Jajodia, Sushil; Noel, Steven

    Traditionally, network administrators rely on labor-intensive processes for tracking network configurations and vulnerabilities. This requires a great deal of expertise, and is error prone because of the complexity of networks and associated security data. The interdependencies of network vulnerabilities make traditional point-wise vulnerability analysis inadequate. We describe a Topological Vulnerability Analysis (TVA) approach that analyzes vulnerability dependencies and shows all possible attack paths into a network. From models of the network vulnerabilities and potential attacker exploits, we compute attack graphs that convey the impact of individual and combined vulnerabilities on overall security. TVA finds potential paths of vulnerability through a network, showing exactly how attackers may penetrate a network. From this, we identify key vulnerabilities and provide strategies for protection of critical network assets.
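
    A toy version of the attack-graph idea can be expressed with a directed graph whose nodes are (host, privilege) states and whose edges are exploits; enumerating paths from the attacker's starting state to a critical asset then exposes the combined effect of individual vulnerabilities. The hosts, privileges and exploit names below are hypothetical, and the sketch is not the TVA tool itself.

    ```python
    import networkx as nx

    # hypothetical network: nodes are (host, privilege) states, edges are exploits
    G = nx.DiGraph()
    exploits = [
        (("internet", "none"), ("web01", "user"), "remote code execution on web app"),
        (("web01", "user"),    ("web01", "root"), "local privilege escalation"),
        (("web01", "root"),    ("db01", "user"),  "reused admin credentials"),
        (("web01", "user"),    ("db01", "user"),  "SQL injection"),
        (("db01", "user"),     ("db01", "root"),  "kernel exploit"),
    ]
    for src, dst, name in exploits:
        G.add_edge(src, dst, exploit=name)

    # enumerate every attack path from the attacker's start state to the critical asset
    start, target = ("internet", "none"), ("db01", "root")
    for path in nx.all_simple_paths(G, start, target):
        steps = [G.edges[u, v]["exploit"] for u, v in zip(path, path[1:])]
        print(" -> ".join(f"{host}:{priv}" for host, priv in path))
        print("   via:", "; ".join(steps))
    ```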

  14. Under-Five Mortality in High Focus States in India: A District Level Geospatial Analysis

    PubMed Central

    Kumar, Chandan; Singh, Prashant Kumar; Rai, Rajesh Kumar

    2012-01-01

    Background This paper examines if, when controlling for biophysical and geographical variables (including rainfall, productivity of agricultural lands, topography/temperature, and market access through road networks), socioeconomic and health care indicators help to explain variations in the under-five mortality rate across districts from nine high focus states in India. The literature on this subject is inconclusive because the survey data, upon which most studies of child mortality rely, rarely include variables that measure these factors. This paper introduces these variables into an analysis of 284 districts from nine high focus states in India. Methodology/Principal Findings Information on the mortality indicator was accessed from the recently conducted Annual Health Survey of 2011, and other socioeconomic and geographic variables were taken from Census 2011, the District Level Household and Facility Survey (2007–08), and the Department of Economics and Statistics Divisions of the concerned states. Given the high spatial dependence (spatial autocorrelation) in the mortality indicator (outcome variable) and its possible predictors, the paper uses the Spatial-Error Model in an effort to negate or reduce the spatial dependence in model parameters. The results show that the coverage gap index (a mixed indicator of district-wise coverage of reproductive and child health services), female literacy, urbanization, economic status and the provision of newborn care in Primary Health Centers in the district emerged as significant correlates of under-five mortality in the nine high focus states in India. The study identifies three clusters with high under-five mortality rates, comprising 30 districts, and advocates urgent attention. Conclusion Even after controlling for the possible biophysical and geographical variables, the study reveals that the health program initiatives have a major role to play in reducing the under-five mortality rate in the high focus states in India. PMID:22629412
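
    For reference, the Spatial-Error Model used in such district-level analyses is conventionally written as below, where W is the spatial weights matrix describing which districts neighbour one another and λ measures the spatial autocorrelation absorbed by the error term (this is the textbook form, not a formula quoted from the paper).

    ```latex
    y = X\beta + u, \qquad u = \lambda W u + \varepsilon, \qquad
    \varepsilon \sim N(0, \sigma^2 I_n)
    ```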

  15. Prediction error and somatosensory insula activation in women recovered from anorexia nervosa.

    PubMed

    Frank, Guido K W; Collier, Shaleise; Shott, Megan E; O'Reilly, Randall C

    2016-08-01

    Previous research in patients with anorexia nervosa showed heightened brain response during a taste reward conditioning task and heightened sensitivity to rewarding and punishing stimuli. Here we tested the hypothesis that individuals recovered from anorexia nervosa would also experience greater brain activation during this task as well as higher sensitivity to salient stimuli than controls. Women recovered from restricting-type anorexia nervosa and healthy control women underwent fMRI during application of a prediction error taste reward learning paradigm. Twenty-four women recovered from anorexia nervosa (mean age 30.3 ± 8.1 yr) and 24 control women (mean age 27.4 ± 6.3 yr) took part in this study. The recovered anorexia nervosa group showed greater left posterior insula activation for the prediction error model analysis than the control group (family-wise error- and small volume-corrected p < 0.05). A group × condition analysis found greater posterior insula response in women recovered from anorexia nervosa than controls for unexpected stimulus omission, but not for unexpected receipt. Sensitivity to punishment was elevated in women recovered from anorexia nervosa. This was a cross-sectional study, and the sample size was modest. Anorexia nervosa after recovery is associated with heightened prediction error-related brain response in the posterior insula as well as greater response to unexpected reward stimulus omission. This finding, together with behaviourally increased sensitivity to punishment, could indicate that individuals recovered from anorexia nervosa are particularly responsive to punishment. The posterior insula processes somatosensory stimuli, including unexpected bodily states, and greater response could indicate altered perception or integration of unexpected or maybe unwanted bodily feelings. Whether those findings develop during the ill state or whether they are biological traits requires further study.

  16. Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.

    2017-08-01

    Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance R in kpc: 0.18 R² mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = −1.49 ± 0.04 mag, mode(M_Ks) = −1.63 ± 0.03 mag, mode(M_W1) = −1.67 ± 0.05 mag, mode(M_W2) = −1.67 ± 0.05 mag, mode(M_W3) = −1.66 ± 0.02 mag, mode(M_W4) = −1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.

  17. Institutionalizing Sex Education in Diverse U.S. School Districts.

    PubMed

    Saul Butler, Rebekah; Sorace, Danene; Hentz Beach, Kathleen

    2018-02-01

    This paper describes the Working to Institutionalize Sex Education (WISE) Initiative, a privately funded effort to support ready public school districts to advance and sustain comprehensive sexuality programs, and examines the degree to which WISE has been successful in increasing access to sex education, removing barriers, and highlighting best practices. The data for this study come from a set of performance indicators, guidance documents, and tools designed for the WISE Initiative to capture changes in sex education institutionalization at WISE school districts. The evaluation includes the analysis of 186 school districts across 12 states in the U.S. As a result of the WISE Initiative, 788,865 unique students received new or enhanced sex education in school classrooms and 88 school districts reached their sex education institutionalization goals. In addition to these school district successes, WISE codified the WISE Method and toolkit-a practical guide to help schools implement sex education. Barriers to implementing sexuality education can be overcome with administrative support and focused technical assistance and training, resulting in significant student reach in diverse school districts nationwide. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  18. Choosing Wisely: the American College of Rheumatology's Top 5 for pediatric rheumatology.

    PubMed

    Rouster-Stevens, Kelly A; Ardoin, Stacy P; Cooper, Ashley M; Becker, Mara L; Dragone, Leonard L; Huttenlocher, Anna; Jones, Karla B; Kolba, Karen S; Moorthy, L Nandini; Nigrovic, Peter A; Stinson, Jennifer N; Ferguson, Polly J

    2014-05-01

    To create a pediatric rheumatology Top 5 list as part of the American Board of Internal Medicine Foundation's Choosing Wisely campaign. Delphi surveys of a core group of representative pediatric rheumatology providers from across North America generated candidate Top 5 items. Items with high content agreement and perceived to be of prevalent use and of high impact were included in a survey of all American College of Rheumatology (ACR) members who identified themselves as providing care to pediatric patients. Items with the highest ratings were subjected to literature review and further evaluation. A total of 121 candidate items were proposed in the initial Delphi survey and were reduced to 28 items in subsequent surveys. These 28 items were sent to 1,198 rheumatology providers who care for pediatric patients, and 397 (33%) responded. Based upon survey data and literature review, the Top 5 items were identified. These items focused on testing for antinuclear antibodies, autoantibody panels, Lyme disease, methotrexate toxicity monitoring, and use of routine radiographs. The ACR pediatric rheumatology Top 5 is one of the first pediatric subspecialty-specific Choosing Wisely Top 5 lists and provides an opportunity for patients and providers to discuss appropriate use of health care in pediatric rheumatology. Copyright © 2014 by the American College of Rheumatology.

  19. Attitudes, perceptions and awareness concerning quaternary prevention among family doctors working in the Social Security System, Peru: a cross-sectional descriptive study.

    PubMed

    Cuba Fuentes, María Sofía; Zegarra Zamalloa, Carlos Orlando; Reichert, Sonja; Gill, Dawn

    2016-04-27

    Quaternary Prevention is defined as the action taken to identify patients at risk of overtreatment, to protect them from additional medical treatments, and to suggest interventions that are ethically acceptable. Many countries and organizations have joined in the efforts to practice quaternary prevention. These countries started a campaign called Choosing Wisely that implements recommendations in order to avoid harming patients. To determine the attitudes, perceptions and awareness towards Quaternary Prevention and the practice of “Choosing Wisely Canada Recommendations” among family doctors working in the Social Security System in Peru. A questionnaire was developed after reviewing the literature and contacting experts in the field and was sent by email to all 64 family physicians in the Social Security System (Essalud) in Lima Peru. Responses were received from 40 participants. The response rate was 64%. Approximately 95% reported that they understand the concept of quaternary prevention. Agreement with all the recommendations was 90% or higher. In most of the recommendations the applicability was more than 80%. The most important barriers perceived for the practice of Quaternary Prevention were patients’ expectations (33%). There are positive perceptions towards Quaternary Preventions and Choosing Wisely recommendations in the family doctors of social security in Lima Peru.

  20. The American Academy of Neurology's top five choosing wisely recommendations.

    PubMed

    Langer-Gould, Annette M; Anderson, Wayne E; Armstrong, Melissa J; Cohen, Adam B; Eccher, Matthew A; Iverson, Donald J; Potrebic, Sonja B; Becker, Amanda; Larson, Rod; Gedan, Alicia; Getchius, Thomas S D; Gronseth, Gary S

    2013-09-10

    To discuss the American Academy of Neurology (AAN)'s Top Five Recommendations in the Choosing Wisely campaign promoting high-value neurologic medicine and physician-patient communication. The AAN published its Top Five Recommendations in February 2013 in collaboration with the American Board of Internal Medicine Foundation and Consumer Reports. A Choosing Wisely Working Group of 10 AAN members was formed to oversee the process and craft the evidence-based recommendations. AAN members were solicited for recommendations, the recommendations were sent out for external review, and the Working Group members (article authors) used a modified Delphi process to select their Top Five Recommendations. The Working Group submitted 5 neurologic recommendations to the AAN Practice Committee and Board of Directors; all 5 were approved by both entities in September 2012. Recommendation 1: Don't perform EEGs for headaches. Recommendation 2: Don't perform imaging of the carotid arteries for simple syncope without other neurologic symptoms. Recommendation 3: Don't use opioids or butalbital for treatment of migraine, except as a last resort. Recommendation 4: Don't prescribe interferon-β or glatiramer acetate to patients with disability from progressive, nonrelapsing forms of multiple sclerosis. Recommendation 5: Don't recommend carotid endarterectomy for asymptomatic carotid stenosis unless the complication rate is low (<3%).

  1. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.

  2. MEASURING THE LUMINOSITY AND VIRIAL BLACK HOLE MASS DEPENDENCE OF QUASAR–GALAXY CLUSTERING AT z ∼ 0.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krolewski, Alex G.; Eisenstein, Daniel J., E-mail: akrolewski@college.harvard.edu

    2015-04-10

    We study the dependence of quasar clustering on quasar luminosity and black hole mass by measuring the angular overdensity of photometrically selected galaxies imaged by the Wide-field Infrared Survey Explorer (WISE) about z ∼ 0.8 quasars from SDSS. By measuring the quasar–galaxy cross-correlation function and using photometrically selected galaxies, we achieve a higher density of tracer objects and a more sensitive detection of clustering than measurements of the quasar autocorrelation function. We test models of quasar formation and evolution by measuring the luminosity dependence of clustering amplitude. We find a significant overdensity of WISE galaxies about z ∼ 0.8 quasars at 0.2–6.4 h⁻¹ Mpc in projected comoving separation. We find no appreciable increase in clustering amplitude with quasar luminosity across a decade in luminosity, and a power-law fit between luminosity and clustering amplitude gives an exponent of −0.01 ± 0.06 (1σ error). We also fail to find a significant relationship between clustering amplitude and black hole mass, although our dynamic range in true mass is suppressed due to the large uncertainties in virial black hole mass estimates. Our results indicate that a small range in host dark matter halo mass maps to a large range in quasar luminosity.

  3. Work flow analysis of around-the-clock processing of blood culture samples and integrated MALDI-TOF mass spectrometry analysis for the diagnosis of bloodstream infections.

    PubMed

    Schneiderhan, Wilhelm; Grundt, Alexander; Wörner, Stefan; Findeisen, Peter; Neumaier, Michael

    2013-11-01

    Because sepsis has a high mortality rate, rapid microbiological diagnosis is required to enable efficient therapy. The effectiveness of MALDI-TOF mass spectrometry (MALDI-TOF MS) analysis in reducing turnaround times (TATs) for blood culture (BC) pathogen identification when available in a 24-h hospital setting has not been determined. On the basis of data from a total number of 912 positive BCs collected within 140 consecutive days and work flow analyses of laboratory diagnostics, we evaluated different models to assess the TATs for batch-wise and for immediate response (real-time) MALDI-TOF MS pathogen identification of positive BC results during the night shifts. The results were compared to TATs from routine BC processing and biochemical identification performed during regular working hours. Continuous BC incubation together with batch-wise MALDI-TOF MS analysis enabled significant reductions of up to 58.7 h in the mean TATs for the reporting of the bacterial species. The TAT of batch-wise MALDI-TOF MS analysis was inferior by a mean of 4.9 h when compared to the model of the immediate work flow under ideal conditions with no constraints in staff availability. Together with continuous cultivation of BC, the 24-h availability of MALDI-TOF MS can reduce the TAT for microbial pathogen identification within a routine clinical laboratory setting. Batch-wise testing of positive BC loses a few hours compared to real-time identification but is still far superior to classical BC processing. Larger prospective studies are required to evaluate the contribution of rapid around-the-clock pathogen identification to medical decision-making for septicemic patients.

  4. Star Formation Activity Beyond the Outer Arm. I. WISE -selected Candidate Star-forming Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izumi, Natsuko; Yasui, Chikako; Saito, Masao

    The outer Galaxy beyond the Outer Arm provides a good opportunity to study star formation in an environment significantly different from that in the solar neighborhood. However, star-forming regions in the outer Galaxy have never been comprehensively studied or cataloged because of the difficulties in detecting them at such large distances. We studied 33 known young star-forming regions associated with 13 molecular clouds at R_G ≥ 13.5 kpc in the outer Galaxy with data from the Wide-field Infrared Survey Explorer (WISE) mid-infrared all-sky survey. From their color distribution, we developed a simple identification criterion of star-forming regions in the outer Galaxy with the WISE color. We applied the criterion to all the WISE sources in the molecular clouds in the outer Galaxy at R_G ≥ 13.5 kpc detected with the Five College Radio Astronomy Observatory (FCRAO) ¹²CO survey of the outer Galaxy, of which the survey region is 102.49° ≤ l ≤ 141.54°, −3.03° ≤ b ≤ 5.41°, and successfully identified 711 new candidate star-forming regions in 240 molecular clouds. The large number of samples enables us to perform the statistical study of star formation properties in the outer Galaxy for the first time. This study is crucial to investigate the fundamental star formation properties, including star formation rate, star formation efficiency, and initial mass function, in a primordial environment such as the early phase of the Galaxy formation.

  5. VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.

    PubMed

    Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro

    2016-01-01

    In healthy individuals, behavioral outcomes are highly associated with the variability in brain regional structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in Matlab® and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effect models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
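
    VoxelStats itself is a MATLAB package; the sketch below is a NumPy analogue of the core idea of a mass-univariate (voxel-wise) general linear model, fitting the same design matrix at every voxel at once. The data, effect size and threshold are simulated and illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def voxelwise_glm(images, design):
        """Mass-univariate GLM: fit the same design matrix at every voxel.

        images : (n_subjects, n_voxels) array of flattened images
        design : (n_subjects, n_covariates) design matrix (include a constant column)
        Returns betas and t-statistics, both of shape (n_covariates, n_voxels).
        """
        n, k = design.shape
        beta, _, _, _ = np.linalg.lstsq(design, images, rcond=None)
        resid = images - design @ beta
        sigma2 = (resid ** 2).sum(axis=0) / (n - k)          # per-voxel error variance
        se = np.sqrt(np.outer(np.diag(np.linalg.inv(design.T @ design)), sigma2))
        return beta, beta / se

    # toy data: 40 subjects, 10,000 voxels, an age effect injected in the first 100 voxels
    n_subj, n_vox = 40, 10_000
    age = rng.uniform(60, 85, size=n_subj)
    images = rng.normal(size=(n_subj, n_vox))
    images[:, :100] += 0.15 * (age[:, None] - age.mean())
    design = np.column_stack([np.ones(n_subj), age])

    betas, tstats = voxelwise_glm(images, design)
    print("voxels with |t(age)| > 4:", int((np.abs(tstats[1]) > 4).sum()))
    ```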

  6. Predicting residue-wise contact orders in proteins by support vector regression.

    PubMed

    Song, Jiangning; Burrage, Kevin

    2006-10-03

    The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure that represents the extent of long-range contacts and is considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55 and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, incorporating the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best prediction accuracy, with a CC of 0.60 and an RMSE of 0.78, which is at least comparable to the other existing methods. The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool in extracting the protein sequence-structure relationship and in estimating protein structural profiles from amino acid sequences.
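
    A minimal sketch of the modeling setup described above, assuming scikit-learn's SVR with an RBF kernel and random stand-in features in place of real PSI-BLAST profile windows; it reports the same evaluation metrics used in the abstract (Pearson CC and RMSE).

```python
# Hedged sketch: predicting a residue-wise property with support vector
# regression, evaluated by Pearson correlation and RMSE as in the abstract.
# The features here are random stand-ins, not real PSI-BLAST profile windows.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 40))              # e.g. a flattened sequence window
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1000)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
pred = model.predict(X_te)

cc, _ = pearsonr(y_te, pred)
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
print(f"CC = {cc:.2f}, RMSE = {rmse:.2f}")
```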

  7. Lrp4 and Wise interplay controls the formation and patterning of mammary and other skin appendage placodes by modulating Wnt signaling.

    PubMed

    Ahn, Youngwook; Sims, Carrie; Logue, Jennifer M; Weatherbee, Scott D; Krumlauf, Robb

    2013-02-01

    The future site of skin appendage development is marked by a placode during embryogenesis. Although Wnt/β-catenin signaling is known to be essential for skin appendage development, it is unclear which cellular processes are controlled by the signaling and how the precise level of the signaling activity is achieved during placode formation. We have investigated roles for Lrp4 and its potential ligand Wise (Sostdc1) in mammary and other skin appendage placodes. Lrp4 mutant mice displayed a delay in placode initiation and changes in distribution and number of mammary precursor cells leading to abnormal morphology, number and position of mammary placodes. These Lrp4 mammary defects, as well as limb defects, were associated with elevated Wnt/β-catenin signaling and were rescued by reducing the dose of the Wnt co-receptor genes Lrp5 and Lrp6, or by inactivating the gene encoding β-catenin. Wise-null mice phenocopied a subset of the Lrp4 mammary defects and Wise overexpression reduced the number of mammary precursor cells. Genetic epistasis analyses suggest that Wise requires Lrp4 to exert its function and that, together, they have a role in limiting mammary fate, but Lrp4 has an early Wise-independent role in facilitating placode formation. Lrp4 and Wise mutants also share defects in vibrissa and hair follicle development, suggesting that the roles played by Lrp4 and Wise are common to skin appendages. Our study presents genetic evidence for interplay between Lrp4 and Wise in inhibiting Wnt/β-catenin signaling and provides an insight into how modulation of Wnt/β-catenin signaling controls cellular processes important for skin placode formation.

  8. Quasar probabilities and redshifts from WISE mid-IR through GALEX UV photometry

    NASA Astrophysics Data System (ADS)

    DiPompeo, M. A.; Bovy, J.; Myers, A. D.; Lang, D.

    2015-09-01

    Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colours are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced photometry on public WISE data at the locations of Sloan Digital Sky Survey (SDSS) point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at z > 2. The use of SDSS+WISE photometry is comparable to the use of SDSS+ultraviolet+near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability P_QSO > 0.2. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and efficient than simple WISE colour-cuts, especially at high redshift. Our fits verify that above z ~ 3 WISE colours become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find highly obscured quasars that WISE colour-cuts identify in significant numbers.
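
    As a simplified stand-in for the extreme-deconvolution idea, the sketch below models quasar and star densities in a two-dimensional color space with ordinary Gaussian mixtures and converts them into a quasar probability. Real XD also folds per-object flux uncertainties and missing bands into the fit, which plain GaussianMixture does not; the colors, component counts, and the p_qso helper are illustrative assumptions.

```python
# Simplified stand-in for the XDQSOz idea: model quasar and star densities in
# color space with Gaussian mixtures and convert them into a quasar
# probability. Real extreme deconvolution additionally deconvolves per-object
# flux errors, which GaussianMixture does not.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic 2-D "colors" (e.g. g-i vs W1-W2) for training stars and quasars.
stars   = rng.normal([0.8, 0.1], 0.25, size=(2000, 2))
quasars = rng.normal([0.2, 0.9], 0.30, size=(500, 2))

gmm_star = GaussianMixture(n_components=3, random_state=0).fit(stars)
gmm_qso  = GaussianMixture(n_components=3, random_state=0).fit(quasars)
prior_qso = len(quasars) / (len(quasars) + len(stars))

def p_qso(colors):
    """Posterior probability of being a quasar for each row of `colors`."""
    lq = np.exp(gmm_qso.score_samples(colors)) * prior_qso
    ls = np.exp(gmm_star.score_samples(colors)) * (1.0 - prior_qso)
    return lq / (lq + ls)

test = np.array([[0.25, 0.85], [0.75, 0.05]])
print(p_qso(test))   # high for the quasar-like color, low for the star-like one
# A catalogue cut such as P_QSO > 0.2 (as in the abstract) would then be applied.
```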

  9. Education in an Era of Accountability: Do You Have to Sacrifice Wise Practices?

    ERIC Educational Resources Information Center

    Flannery Quinn, Suzanne M.; Ethridge, Elizabeth A.

    2006-01-01

    This research explores the history, philosophies and practices of an "A rated" public charter school (serving infants through 8th grade) in Florida. Participants are the professional educators who were involved in the founding of the school in 1999. Findings are based on semi-structured interviews probing the details of the history of…

  10. Streamline Basal Application of Herbicide for Small-Stem Hardwood Control

    Treesearch

    James H. Miller

    1990-01-01

    Abstract. The effectiveness of low-volume basal application of herbicide - "streamline" application - was evaluated on 25 hardwood species and loblolly pine. Test mixtures were step-wise rates of Garlon 4 mixed in diesel fuel with a penetrant added. Most comparisons tested 10%, 20%, and 30% mixtures of Garlon 4, while tests with...

  11. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is derived in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
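
    The exact characteristic-function derivation is not reproduced here; as a hedged companion, the following Monte Carlo sketch estimates the bit error rate of BPSK over a flat Nakagami-m fading channel (ignoring multiple-access interference), which is a common sanity check for analytical error-rate expressions. The function name and parameter values are illustrative.

```python
# Illustrative Monte Carlo check (not the paper's characteristic-function
# derivation): bit error rate of BPSK on a flat Nakagami-m fading channel,
# ignoring multiple-access interference. The fading power follows a Gamma
# distribution with shape m and unit mean.
import numpy as np

def ber_nakagami_bpsk(m, ebn0_db, n_bits=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    power = rng.gamma(shape=m, scale=1.0 / m, size=n_bits)   # E[power] = 1
    h = np.sqrt(power)                                       # Nakagami amplitude
    bits = rng.integers(0, 2, size=n_bits)
    symbols = 1.0 - 2.0 * bits                               # BPSK: 0 -> +1, 1 -> -1
    noise = rng.normal(scale=np.sqrt(1.0 / (2.0 * ebn0)), size=n_bits)
    received = h * symbols + noise
    decisions = (received < 0).astype(int)
    return np.mean(decisions != bits)

for m in (1.0, 2.0, 4.0):      # m = 1 corresponds to Rayleigh fading
    print(m, ber_nakagami_bpsk(m, ebn0_db=10.0))
```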

  12. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  13. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
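
    The metric used throughout this study is simply errors divided by opportunities for error; a trivial sketch with made-up numbers:

```python
# The metric used in the abstract: human error rate = errors / opportunities.
# Numbers below are illustrative, not taken from the study.
def error_rate(errors, opportunities):
    return errors / opportunities

trays_checked = 1200          # opportunities for error on one day's trayline
tray_errors = 42
print(f"{error_rate(tray_errors, trays_checked):.1%}")   # 3.5%
```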

  14. WISE TF: A MID-INFRARED, 3.4 {mu}m EXTENSION OF THE TULLY-FISHER RELATION USING WISE PHOTOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagattuta, David J.; Mould, Jeremy R.; Staveley-Smith, Lister

    2013-07-10

    We present a mid-infrared Tully-Fisher (TF) relation using photometry from the 3.4 μm W1 band of the Wide-field Infrared Survey Explorer (WISE) satellite. The WISE TF relation is formed from 568 galaxies taken from the all-sky 2MASS Tully-Fisher (2MTF) galaxy catalog, spanning a range of environments including field, group, and cluster galaxies. This constitutes the largest mid-infrared TF relation constructed to date. After applying a number of corrections to galaxy magnitudes and line widths, we measure a master TF relation given by M_corr = -22.24 - 10.05[log(W_corr) - 2.5], with an average dispersion of σ_WISE = 0.686 mag. There is some tension between WISE TF and a preliminary 3.6 μm relation, which has a shallower slope and almost no intrinsic dispersion. However, our results agree well with a more recent relation constructed from a large sample of cluster galaxies. We additionally compare WISE TF to the near-infrared 2MTF template relations, finding good agreement between the TF parameters and total dispersions of WISE TF and the 2MTF K-band template. This fact, coupled with typical galaxy colors of (K - W1) ≈ 0, suggests that these two bands are tracing similar stellar populations, including the older, centrally located stars in the galactic bulge which can (for galaxies with a prominent bulge) dominate the light profile.
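
    The master relation quoted above can be evaluated directly; a small sketch, with an arbitrary example line width:

```python
# Evaluating the master Tully-Fisher relation quoted in the abstract,
# M_corr = -22.24 - 10.05 * (log10(W_corr) - 2.5), for a given corrected
# line width W_corr in km/s. The example width is arbitrary.
import math

def wise_tf_magnitude(w_corr):
    return -22.24 - 10.05 * (math.log10(w_corr) - 2.5)

print(wise_tf_magnitude(400.0))   # about -23.3 mag for a 400 km/s line width
```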

  15. WashWise cleans up the Northwest: Lessons learned from the Northwest high-efficiency clothes washer initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, L.M.; Banks, D.L.; Brenneke, M.E.

    1998-07-01

    WashWise is a regional market transformation program designed to promote the sale and acceptance of resource-efficient clothes washers (RECWs) in the Northwest through financial incentives, education, and marketing. The program is sponsored by the Northwest Energy Efficiency Alliance (the Alliance), a non-profit regional consortium of utilities, government, public interest groups, and private sector organizations. WashWise started in May 1997 and will continue through the end of 1999. WashWise works to transform the clothes washer market primarily at the retail level through an in-store instant rebate and a retailer bonus. In addition to financial incentives, WashWise has undertaken a collaborative marketing and promotional campaign to educate consumers about the financial savings and other benefits of RECWs. The program promotes only RECWs that meet strict energy and water savings criteria. WashWise has far exceeded initial expectations; annual program sales goals were met in the first three months. As of June 1998, 30,000 RECWs have been sold through the program (representing approximately 13 percent of the Northwest residential clothes washer market). In addition, over 540 retailers, including national and regional chains, are participating in the program. Preliminary survey results have also provided evidence of broad customer satisfaction. This paper reviews the key elements that have contributed to the success of the WashWise program. In addition, the paper provides program results and indicates future directions for WashWise and the RECW market.

  16. Paternal sperm DNA methylation associated with early signs of autism risk in an autism-enriched cohort.

    PubMed

    Feinberg, Jason I; Bakulski, Kelly M; Jaffe, Andrew E; Tryggvadottir, Rakel; Brown, Shannon C; Goldman, Lynn R; Croen, Lisa A; Hertz-Picciotto, Irva; Newschaffer, Craig J; Fallin, M Daniele; Feinberg, Andrew P

    2015-08-01

    Epigenetic mechanisms such as altered DNA methylation have been suggested to play a role in autism, beginning with the classical association of Prader-Willi syndrome, an imprinting disorder, with autistic features. Here we tested for the relationship of paternal sperm DNA methylation with autism risk in offspring, examining an enriched-risk cohort of fathers of autistic children. We examined genome-wide DNA methylation (DNAm) in paternal semen biosamples obtained from an autism spectrum disorder (ASD) enriched-risk pregnancy cohort, the Early Autism Risk Longitudinal Investigation (EARLI) cohort, to estimate associations between sperm DNAm and prospective ASD development, using a 12-month ASD symptoms assessment, the Autism Observation Scale for Infants (AOSI). We analysed methylation data from 44 sperm samples run on the CHARM 3.0 array, which contains over 4 million probes (over 7 million CpG sites), including 30 samples also run on the Illumina Infinium HumanMethylation450 (450K) BeadChip platform (∼485 000 CpG sites). We also examined associated regions in an independent sample of post-mortem human brain ASD and control samples for which Illumina 450K DNA methylation data were available. Using region-based statistical approaches, we identified 193 differentially methylated regions (DMRs) in paternal sperm with a family-wise empirical P-value [family-wise error rate (FWER)] <0.05 associated with performance on the Autism Observational Scale for Infants (AOSI) at 12 months of age in offspring. The DMRs clustered near genes involved in developmental processes, including many genes in the SNORD family, within the Prader-Willi syndrome gene cluster. These results were consistent among the 75 probes on the Illumina 450K array that cover AOSI-associated DMRs from CHARM. Further, 18 of 75 (24%) 450K array probes showed consistent differences in the cerebellums of autistic individuals compared with controls. These data suggest that epigenetic differences in paternal sperm may contribute to autism risk in offspring, and provide evidence that directionally consistent, potentially related epigenetic mechanisms may be operating in the cerebellum of individuals with autism. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
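
    The study reports family-wise error rate (FWER)-controlled empirical p-values. As a generic illustration (not the region-based CHARM pipeline used by the authors), the sketch below computes FWER-adjusted empirical p-values with the standard max-statistic permutation approach on simulated data.

```python
# Sketch of how a family-wise empirical p-value (FWER) can be obtained by
# permutation, using the maximum test statistic across all features in each
# permutation. Generic illustration only, not the study's region-based pipeline.
import numpy as np

def fwer_pvalues(X, y, n_perm=2000, seed=0):
    """X: (n_samples, n_features) methylation values; y: continuous outcome."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    obs = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        yp = rng.permutation(yc)
        null = np.abs(Xc.T @ yp) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yp))
        max_null[b] = null.max()
    # FWER-adjusted empirical p-value: how often the permutation maximum
    # exceeds the observed statistic for each feature.
    return (1 + (max_null[None, :] >= obs[:, None]).sum(axis=1)) / (n_perm + 1)

rng = np.random.default_rng(1)
X = rng.normal(size=(44, 500))            # 44 samples, 500 features
y = rng.normal(size=44) + 2.0 * X[:, 0]   # feature 0 is truly associated
p = fwer_pvalues(X, y, n_perm=500)
print(p[:3])                              # feature 0 should have a small p-value
```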

  17. Water Clouds in the Atmosphere of a Jupiter-Like Brown Dwarf

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-07-01

    Lying a mere 7.2 light-years away, WISE 0855 is the nearest known planetary-mass object. This brown dwarf, a failed star just slightly more massive than Jupiter, is also the coldest known compact body outside of our solar system, and new observations have now provided us with a first look at its atmosphere. [Figure: temperature-pressure profiles of Jupiter, WISE 0855, and what was previously the coldest extrasolar object with a 5-μm spectrum, Gl 570D. Thicker lines show the location of each object's 5-μm photosphere. WISE 0855's and Jupiter's photospheres are near the point where water starts to condense out into clouds (dashed line). Skemer et al. 2016] Challenging Observations: With a chilly temperature of 250 K, the brown dwarf WISE 0855 is the closest thing we've been able to observe to a body resembling Jupiter's ~130 K. WISE 0855 therefore presents an intriguing opportunity to directly study the atmosphere of an object whose physical characteristics are similar to those of our own gas giants. But studying the atmospheric characteristics of such a body is tricky. WISE 0855 is too cold and faint to obtain traditional optical or near-infrared (≲2.5 μm) spectroscopy of it. Luckily, like Jupiter, the opacity of its gas allows thermal emission from its deep atmosphere to escape through an atmospheric window around ~5 μm. A team of scientists led by Andrew Skemer (UC Santa Cruz) set out to observe WISE 0855 in this window with the Gemini-North telescope and the Gemini Near-Infrared Spectrograph. Though WISE 0855 is five times fainter than the faintest object previously detected with ground-based 5-μm spectroscopy, the dry air of Mauna Kea (and a lot of patience!) allowed the team to obtain unprecedented spectra of this object. [Figure: WISE 0855's spectrum shows absorption features consistent with water vapor, and it is best fit by a cloudy brown-dwarf model. Skemer et al. 2016] Water Clouds Found: Exoplanets and brown dwarfs cooler than ~350 K are expected to form water ice clouds in their upper atmospheres, and these clouds should be thick enough to alter the emergent spectrum that we observe. Does WISE 0855 fit this picture? Yes! By modeling the spectrum of WISE 0855, Skemer and collaborators demonstrate that it is completely dominated by water absorption lines. This represents the first evidence of water clouds in a body outside of our solar system. Atmospheric Turbulence: WISE 0855's water absorption profile bears a striking resemblance to Jupiter's. Where the spectra differ, however, is in the lower-wavelength end of the observations: Jupiter also shows absorption by a molecule called phosphine, whereas WISE 0855 doesn't. [Figure: Jupiter's spectrum is strikingly similar to WISE 0855's from 4.8 to 5.2 μm, where both objects are dominated by water absorption. But from 4.5 to 4.8 μm, Jupiter's spectrum is dominated by phosphine absorption, indicating a turbulent atmosphere, while WISE 0855's is not. Skemer et al. 2016] Interestingly, if the bodies were both in equilibrium, neither WISE 0855 nor Jupiter should contain detectable phosphine in their photospheres. The reason Jupiter does is that there is a significant amount of turbulent mixing in its atmosphere that dredges up phosphine from the planet's hot interior. The fact that WISE 0855 shows no sign of phosphine suggests its atmosphere may be much less turbulent than Jupiter's. These observations represent an important step as we attempt to understand the atmospheres of extrasolar bodies that are similar to our own gas-giant planets. Observations of other such bodies in the future, especially using new technology like the James Webb Space Telescope, will allow us to learn more about the dynamical and chemical processes that occur in cold atmospheres. Citation: Andrew J. Skemer et al. 2016 ApJ 826 L17. doi:10.3847/2041-8205/826/2/L17

  18. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  19. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  20. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
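
    A hedged sketch of the analysis idea: fit a regression of real-world error rates on laboratory error rates for one pharmacy chain and report the variance explained, then check the fitted model against a second chain. All data below are simulated placeholders; none of the study's numbers are reproduced.

```python
# Hedged sketch: regress real-world drug-name confusion error rates on
# laboratory test error rates and report variance explained, with validation
# against a second pharmacy chain. All values are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n_pairs = 120                                  # drug-name pairs
lab = rng.uniform(0.0, 0.4, size=(n_pairs, 3)) # lab memory/perception error rates
true_coef = np.array([0.02, 0.05, 0.01])
chain_a = lab @ true_coef + rng.normal(scale=0.01, size=n_pairs)
chain_b = lab @ true_coef + rng.normal(scale=0.01, size=n_pairs)

model = LinearRegression().fit(lab, chain_a)
print("variance explained, chain A:", round(r2_score(chain_a, model.predict(lab)), 2))
print("variance explained, chain B:", round(r2_score(chain_b, model.predict(lab)), 2))
```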

  1. Parenting Interventions Implementation Science: How Delivery Format Impacts the Parenting Wisely Program

    ERIC Educational Resources Information Center

    Cotter, Katie L.; Bacallao, Martica; Smokowski, Paul R.; Robertson, Caroline I. B.

    2013-01-01

    Objectives: This study examines the implementation and effectiveness of Parenting Wisely, an Internet-based parenting skills intervention. The study assesses whether parents benefit from Parenting Wisely participation and whether the delivery format influences program effectiveness. Method: This study uses a quasi-experimental design.…

  2. Spending Money Wisely.

    ERIC Educational Resources Information Center

    Wentworth, Donald R.; And Others

    1982-01-01

    The theme article of this issue, "Spending Money Wisely," by Donald R. Wentworth, begins with an explanation of basic strategies which aid wise spending. The article goes on to provide an introduction to economic reasoning related to consumer purchases and focusing on the role of incentives, scarcity, and alternatives. Four teaching units follow…

  3. Choosing Wisely When It Comes to Eye Care: Punctal Plugs for Dry Eye

    MedlinePlus

    ... Wisely campaign is available at Choosing Wisely.

  4. Constructing a WISE High Resolution Galaxy Atlas

    NASA Technical Reports Server (NTRS)

    Jarrett, T. H.; Masci, F.; Tsai, C. W.; Petty, S.; Cluver, M.; Assef, Roberto J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.

    2012-01-01

    After eight months of continuous observations, the Wide-field Infrared Survey Explorer (WISE) mapped the entire sky at 3.4 micron, 4.6 micron, 12 micron, and 22 micron. We have begun a dedicated WISE High Resolution Galaxy Atlas project to fully characterize large, nearby galaxies and produce a legacy image atlas and source catalog. Here we summarize the deconvolution techniques used to significantly improve the spatial resolution of WISE imaging, specifically designed to study the internal anatomy of nearby galaxies. As a case study, we present results for the galaxy NGC 1566, comparing the WISE enhanced-resolution image processing to that of Spitzer, Galaxy Evolution Explorer, and ground-based imaging. This is the first paper in a two-part series; results for a larger sample of nearby galaxies are presented in the second paper.

  5. Impact of HealthWise South Africa on polydrug use and high-risk sexual behavior.

    PubMed

    Tibbits, Melissa K; Smith, Edward A; Caldwell, Linda L; Flisher, Alan J

    2011-08-01

    This study was designed to evaluate the efficacy of the HealthWise South Africa HIV and substance abuse prevention program at impacting adolescents' polydrug use and sexual risk behaviors. HealthWise is a school-based intervention designed to promote social-emotional skills, increase knowledge and refusal skills relevant to substance use and sexual behaviors, and encourage healthy free time activities. Four intervention schools in one township near Cape Town, South Africa were matched to five comparison schools (N = 4040). The sample included equal numbers of male and female participants (Mean age = 14.0). Multiple regression was used to assess the impact of HealthWise on the outcomes of interest. Findings suggest that among virgins at baseline (beginning of eighth grade) who had sex by Wave 5 (beginning of 10th grade), HealthWise youth were less likely than comparison youth to engage in two or more risk behaviors at last sex. Additionally, HealthWise was effective at slowing the onset of frequent polydrug use among non-users at baseline and slowing the increase in this outcome among all participants. Program effects were not found for lifetime sexual activity, condomless sex refusal and past-month polydrug use. These findings suggest that HealthWise is a promising approach to HIV and substance abuse prevention.

  6. Sustainable Materials Management (SMM) WasteWise Data

    EPA Pesticide Factsheets

    EPA's WasteWise encourages organizations and businesses to achieve sustainability in their practices and reduce select industrial wastes. WasteWise is part of EPA's sustainable materials management efforts, which promote the use and reuse of materials more productively over their entire lifecycles. All U.S. businesses, governments and nonprofit organizations can join WasteWise as a partner, endorser or both. Current participants range from small local governments and nonprofit organizations to large multinational corporations. Partners demonstrate how they reduce waste, practice environmental stewardship and incorporate sustainable materials management into their waste-handling processes. Endorsers promote enrollment in WasteWise as part of a comprehensive approach to help their stakeholders realize the economic benefits to reducing waste. WasteWise helps organizations reduce their impact on global climate change through waste reduction. Every stage of a product's life cycle - extraction, manufacturing, distribution, use and disposal - indirectly or directly contributes to the concentration of greenhouse gases (GHGs) in the atmosphere and affects the global climate. WasteWise is part of EPA's larger SMM program (https://www.epa.gov/smm). Sustainable Materials Management (SMM) is a systemic approach to using and reusing materials more productively over their entire lifecycles. It represents a change in how our society thinks about the use of natural resources

  7. Personalised, predictive and preventive medication process in hospitals—still rather missing: professional opinion survey on medication safety in Czech hospitals (based on professional opinions of recognised Czech health care experts)

    PubMed Central

    2014-01-01

    The survey had the following aims: (1) to rationalise the hypothesis that risks and losses relating to medication process errors in Czech hospitals are at least comparable with those in other developed countries, and EU countries especially, (2) to get a valid professional opinion/estimate on the rate of adverse drug events happening in Czech hospitals, (3) to point out that medication errors represent real and serious risks and (4) to induce hospital management readiness to execute fundamental changes and improvements to medication processes. We reviewed a large number of studies inquiring into hospital medication safety. We then selected the studies which brought reliable findings and formulated credible conclusions. Finally, we addressed reputable Czech experts in health care and asked them structured questions about whether the studies' findings and conclusions corresponded with our respondents' own experience in Czech hospital clinical practice and what their own estimates of adverse drug events' consequences were like. Based on the opinions/estimates of reputable Czech health care experts, the rate of false drug administration may exceed 5%, and over 7% of those cases cause serious health complications to Czech hospital inpatients. Measured by average length of stay (ALOS), Czech inpatients harmed by a false drug administration stay in hospital for more than 2.6 days longer than necessary. Any positive changes to the currently used, traditional ways of drug dispensing and administration, along with computerisation, automation, electronic traceability, validation, or verification, should pay off well. Referring to the above results, it seems wise to follow the EU priorities in health and health care improvements. Thus, a right usage of the financial means provided by the EC—in terms of its new health programmes for the period 2014–2020 (e.g. Horizon 2020)—has a good chance of a good result in doing the right things right, at the right time and in the right way. All citizens of the EU may benefit from using the best practice. PMID:24834138

  8. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    PubMed

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment-wide significant MVDE genes. Our results indicate a tremendous potential gain from integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed informative integration test better summarizes the impacts of condition change on the expression distributions of susceptible genes than do the existent competitors. Therefore, particular attention should be paid to explicitly exploiting the variance heterogeneity induced by condition change in functional genomics analysis.
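
    The abstract's key observation is that mean- and variance-heterogeneity tests are independent under the null, so they can be combined into a single test. The sketch below uses Fisher's method to combine a Welch t test with Levene's test as a simple illustrative stand-in; it is not the authors' IMVT statistic.

```python
# Illustrative sketch only: because mean- and variance-heterogeneity tests are
# independent under the null (as noted in the abstract), their p-values can be
# combined. Fisher's method over a Welch t test and Levene's test is used here
# as a stand-in for the paper's IMVT statistic.
import numpy as np
from scipy import stats

def combined_mean_variance_test(x, y):
    p_mean = stats.ttest_ind(x, y, equal_var=False).pvalue   # Welch t test
    p_var = stats.levene(x, y).pvalue                        # variance heterogeneity
    chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))
    return stats.chi2.sf(chi2, df=4), p_mean, p_var          # Fisher combination

rng = np.random.default_rng(4)
control = rng.normal(0.0, 1.0, size=30)
treated = rng.normal(0.3, 2.0, size=30)    # shifted mean AND inflated variance
print(combined_mean_variance_test(control, treated))
```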

  9. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  10. ["Choosing wisely" in infectious diseases : Overuse of antibiotics - too few vaccinations].

    PubMed

    Jung, N; Koop, H; Riessen, R; Galle, J-C; Jany, B; Märker-Hermann, E

    2016-06-01

    The "choosing wisely" recommendations of the German Society of Internal Medicine (DGIM) and its specialist societies address diagnostic and therapeutic procedures, which are of particular medical importance but applied too often or too rarely in clinical practice. The aim is to further improve treatment of patients. Important topics of overuse and insufficient treatment related to the diagnostics, therapy, prevention and exclusion of infectious diseases could be identified. These topics not only play an important role in the discipline of infectious diseases but are also relevant for other internal medical disciplines. These topics related to infectious diseases have also been integrated into the recommendations of the German Society of Gastroenterology, Digestive and Metabolic Diseases as well as the German Societies for Internal Intensive Care and Emergency Medicine, for Pneumology, for Nephrology and for Rheumatology. The pivotal issues of the recommendations are the inappropriate use of antibiotics and insufficient vaccination rates.

  11. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    NASA Astrophysics Data System (ADS)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, a realistic statistical error estimate is available for the geoid component, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed and its impact quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
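
    The core operation described here is linear covariance propagation: if a derived quantity is a linear functional y = A x of the spherical-harmonic coefficients x with variance-covariance matrix Sigma, then Cov(y) = A Sigma A^T. The sketch below uses random placeholder matrices for the GOCE VCM and for the combined synthesis-and-filter operator, and contrasts full propagation with a variance-only (diagonal) propagation.

```python
# Generic sketch of rigorous covariance propagation. The matrices below are
# random placeholders standing in for the GOCE variance-covariance matrix and
# the synthesis-plus-filter operator mapping coefficients to grid values.
import numpy as np

rng = np.random.default_rng(5)
n_coeff, n_points = 200, 10
L = rng.normal(size=(n_coeff, n_coeff))
Sigma = L @ L.T / n_coeff                   # symmetric positive definite VCM
A = rng.normal(size=(n_points, n_coeff))    # synthesis + filtering operator

cov_y = A @ Sigma @ A.T                     # full propagated covariance
errors = np.sqrt(np.diag(cov_y))            # formal errors at the grid points

# Neglecting the off-diagonal part of Sigma changes the result, which is the
# kind of effect the study quantifies:
cov_y_diag_only = A @ np.diag(np.diag(Sigma)) @ A.T
print(errors[:3], np.sqrt(np.diag(cov_y_diag_only))[:3])
```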

  12. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  13. Choosing Wisely in Emergency Medicine: A National Survey of Emergency Medicine Academic Chairs and Division Chiefs.

    PubMed

    Maughan, Brandon C; Baren, Jill M; Shea, Judy A; Merchant, Raina M

    2015-12-01

    The Choosing Wisely campaign was launched in 2011 to promote stewardship of medical resources by encouraging patients and physicians to speak with each other regarding the appropriateness of common tests and procedures. Medical societies including the American College of Emergency Physicians (ACEP) have developed lists of potentially low-value practices for their members to address with patients. No research has described the awareness or attitudes of emergency physicians (EPs) regarding the Choosing Wisely campaign. The study objective was to assess these beliefs among leaders of academic departments of emergency medicine (EM). This was a Web-based survey of emergency department (ED) chairs and division chiefs at institutions with allopathic EM residency programs. The survey examined awareness of Choosing Wisely, anticipated effects of the program, and discussions of Choosing Wisely with patients and professional colleagues. Participants also identified factors they associated with the use of potentially low-value services in the ED. Questions and answer scales were refined using iterative pilot testing with EPs and health services researchers. Seventy-eight percent (105/134) of invited participants responded to the survey. Eighty percent of respondents were aware of Choosing Wisely. A majority of participants anticipate the program will decrease costs of care (72% of respondents) and use of ED diagnostic imaging (69%) but will have no effect on EP salaries (94%) or medical-legal risks (65%). Only 45% of chairs have ever addressed Choosing Wisely with patients, in contrast to 88 and 82% who have discussed it with faculty and residents, respectively. Consultant-requested tests were identified by 97% of residents as a potential contributor to low-value services in the ED. A substantial majority of academic EM leaders in our study were aware of Choosing Wisely, but only slightly more than half could recall any ACEP recommendations for the program. Respondents familiar with Choosing Wisely anticipated generally positive effects, but chairs reported only infrequently discussing Choosing Wisely with patients. Future research should identify potentially low-value tests requested by consultants and objectively measure the utility and cost of these tests among ED patient populations. © 2015 by the Society for Academic Emergency Medicine.

  14. Multivariate functional response regression, with application to fluorescence spectroscopy in a cervical pre-cancer study.

    PubMed

    Zhu, Hongxiao; Morris, Jeffrey S; Wei, Fengrong; Cox, Dennis D

    2017-07-01

    Many scientific studies measure different types of high-dimensional signals or images from the same subject, producing multivariate functional data. These functional measurements carry different types of information about the scientific process, and a joint analysis that integrates information across them may provide new insights into the underlying mechanism for the phenomenon under study. Motivated by fluorescence spectroscopy data in a cervical pre-cancer study, a multivariate functional response regression model is proposed, which treats multivariate functional observations as responses and a common set of covariates as predictors. This novel modeling framework simultaneously accounts for correlations between functional variables and potential multi-level structures in data that are induced by experimental design. The model is fitted by performing a two-stage linear transformation: a basis expansion applied to each functional variable, followed by principal component analysis of the concatenated basis coefficients. This transformation effectively reduces the intra- and inter-function correlations and facilitates fast and convenient calculation. A fully Bayesian approach is adopted to sample the model parameters in the transformed space, and posterior inference is performed after inverse-transforming the regression coefficients back to the original data domain. The proposed approach produces functional tests that flag local regions on the functional effects, while controlling the overall experiment-wise error rate or false discovery rate. It also enables functional discriminant analysis through posterior predictive calculation. Analysis of the fluorescence spectroscopy data reveals local regions with differential expressions across the pre-cancer and normal samples. These regions may serve as biomarkers for prognosis and disease assessment.
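
    A minimal sketch of the two-stage transformation described above, using a Fourier basis and simulated curves as stand-ins for the spectroscopy data: expand each functional variable on a basis, concatenate the coefficients, and run PCA on the result.

```python
# Sketch of the two-stage transformation: basis expansion of each functional
# variable, concatenation of coefficients, then PCA via SVD. The Fourier basis
# and random curves are placeholders for the spectroscopy measurements.
import numpy as np

def fourier_basis(n_grid, n_basis):
    t = np.linspace(0.0, 1.0, n_grid)
    cols = [np.ones(n_grid)]
    for k in range(1, (n_basis - 1) // 2 + 1):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    return np.column_stack(cols)[:, :n_basis]

rng = np.random.default_rng(6)
n_subj, n_grid, n_basis = 50, 120, 9
curves_a = rng.normal(size=(n_subj, n_grid))       # functional variable 1
curves_b = rng.normal(size=(n_subj, n_grid))       # functional variable 2

B = fourier_basis(n_grid, n_basis)
coef_a = curves_a @ B @ np.linalg.inv(B.T @ B)     # least-squares basis coefficients
coef_b = curves_b @ B @ np.linalg.inv(B.T @ B)
concat = np.hstack([coef_a, coef_b])

# PCA of the concatenated coefficients via SVD of the centered matrix.
centered = concat - concat.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * s                                     # principal component scores
explained = s**2 / np.sum(s**2)
print(scores.shape, np.round(explained[:3], 2))
```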

  15. Using steady-state equations for transient flow calculation in natural gas pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddox, R.N.; Zhou, P.

    1984-04-02

    Maddox and Zhou have extended their technique for calculating the unsteady-state behavior of straight gas pipelines to complex pipeline systems and networks. After developing the steady-state flow rate and pressure profile for each pipe in the network, analysts can perform the transient-state analysis in the real-time step-wise manner described for this technique.

  16. WISE Science: Web-based Inquiry in the Classroom. Technology, Education--Connections

    ERIC Educational Resources Information Center

    Slotta, James D.; Linn, Marcia C.

    2009-01-01

    This book shares the lessons learned by a large community of educational researchers and science teachers as they designed, developed, and investigated a new technology-enhanced learning environment known as WISE: The Web-Based Inquiry Science Environment. WISE offers a collection of free, customizable curriculum projects on topics central to the…

  17. Evaluation of the Wise Guys Male Responsibility Curriculum: Participant-Control Comparisons

    ERIC Educational Resources Information Center

    Gruchow, Harvey William; Brown, Roger K.

    2011-01-01

    Background: Although males are often the initiators of teen sexual activity, pregnancy prevention programs generally target females. To address this deficiency, the Wise Guys Male Responsibility Curriculum was developed to be delivered to adolescent males in weekly classroom sessions. Methods: Seventh grade participants (n = 124) in the Wise Guys…

  18. Brain-Wise Leadership

    ERIC Educational Resources Information Center

    Murphy, Carole; Ozturgut, Osman; French, Joan

    2013-01-01

    The purpose of this article is to help leaders do their jobs more effectively by examining the components of brain-wise leadership. The article is divided into five parts: Part I is a general overview, defining brain-wise leadership, its traits, attributes and some of the styles of effective leadership. Part II begins with the strategies for…

  19. Viewing Violence, Mental Illness and Addiction through a Wise Practices Lens

    ERIC Educational Resources Information Center

    Wesley-Esquimaux, Cynthia C.; Snowball, Andrew

    2010-01-01

    The progressive approaches First Nations, Metis, and Inuit communities use to address health and wellness concerns are rarely written about or acknowledged in a positive manner. This paper speaks to a concept introduced through the Canadian Aboriginal Aids Network (CAAN) entitled "wise practices". CAAN saw a "wise practices"…

  20. R-WISE: A Computerized Environment for Tutoring Critical Literacy.

    ERIC Educational Resources Information Center

    Carlson, P.; Crevoisier, M.

    This paper describes a computerized environment for teaching the conceptual patterns of critical literacy. While the full implementation of the software covers both reading and writing, this paper covers only the writing aspects of R-WISE (Reading and Writing in a Supportive Environment). R-WISE consists of a suite of computerized…

  1. Social class and wise reasoning about interpersonal conflicts across regions, persons and situations

    PubMed Central

    Brienza, Justin P.

    2017-01-01

    We propose that class is inversely related to a propensity for using wise reasoning (recognizing the limits of one's knowledge, considering the world as being in flux and changing, acknowledging and integrating different perspectives) in interpersonal situations, contrary to the established class advantage in abstract cognition. Two studies—an online survey from regions differing in economic affluence (n = 2145) and a representative in-lab study with stratified sampling of adults from working and middle-class backgrounds (n = 299)—tested this proposition, indicating that higher social class was consistently related to lower levels of wise reasoning across different levels of analysis, including regional and individual differences, and subjective construal of specific situations. The results held across personal and standardized hypothetical situations, across self-reported and observed wise reasoning, and when controlling for fluid and crystallized cognitive abilities. Consistent with an ecological framework, class differences in wise reasoning were specific to interpersonal (versus societal) conflicts. These findings suggest that higher social class weighs individuals down by providing the ecological constraints that undermine wise reasoning about interpersonal affairs. PMID:29263284

  2. Social class and wise reasoning about interpersonal conflicts across regions, persons and situations.

    PubMed

    Brienza, Justin P; Grossmann, Igor

    2017-12-20

    We propose that class is inversely related to a propensity for using wise reasoning (recognizing the limits of one's knowledge, considering the world as in flux and change, acknowledging and integrating different perspectives) in interpersonal situations, contrary to the established class advantage in abstract cognition. Two studies - an online survey from regions differing in economic affluence (n = 2145) and a representative in-lab study with stratified sampling of adults from working- and middle-class backgrounds (n = 299) - tested this proposition, indicating that higher social class consistently related to lower levels of wise reasoning across different levels of analysis, including regional and individual differences, and subjective construal of specific situations. The results held across personal and standardized hypothetical situations, across self-reported and observed wise reasoning, and when controlling for fluid and crystallized cognitive abilities. Consistent with an ecological framework, class differences in wise reasoning were specific to interpersonal (versus societal) conflicts. These findings suggest that higher social class weighs individuals down by providing the ecological constraints that undermine wise reasoning about interpersonal affairs. © 2017 The Authors.

  3. VizieR Online Data Catalog: WISE Preliminary Data Release (Cutri+ 2011)

    NASA Astrophysics Data System (ADS)

    Cutri, R. M.; et al.

    2012-01-01

    The Wide-field Infrared Survey Explorer (WISE; see Wright et al. 2010AJ....140.1868W) is a NASA Medium Class Explorer mission that conducted a digital imaging survey of the entire sky in the 3.4, 4.6, 12 and 22um mid-infrared bandpasses (hereafter W1, W2, W3 and W4). WISE will produce and release to the world astronomical and educational communities and general public a digital Image Atlas covering the sky in the four survey bands, and a reliable Source Catalog containing accurate photometry and astrometry for over 300 million objects. The WISE Catalog and Atlas will enable a broad variety of research efforts ranging from the search for the closest stars and brown dwarfs to the most luminous galaxies in the Universe. WISE science data products will serve as an important reference data set for planning observations and interpreting data obtained with future ground and space-borne observatories, such as JWST. WISE was launched on 2009-12-14 from Vandenberg SLC2W. (1 data file).

  4. VizieR Online Data Catalog: WISE All-Sky Data Release (Cutri+ 2012)

    NASA Astrophysics Data System (ADS)

    Cutri, R. M.; et al.

    2012-04-01

    The Wide-field Infrared Survey Explorer (WISE; see Wright et al. 2010AJ....140.1868W) is a NASA Medium Class Explorer mission that conducted a digital imaging survey of the entire sky in the 3.4, 4.6, 12 and 22um mid-infrared bandpasses (hereafter W1, W2, W3 and W4). WISE will produce and release to the world astronomical and educational communities and general public a digital Image Atlas covering the sky in the four survey bands, and a reliable Source Catalog containing accurate photometry and astrometry for over 300 million objects. The WISE Catalog and Atlas will enable a broad variety of research efforts ranging from the search for the closest stars and brown dwarfs to the most luminous galaxies in the Universe. WISE science data products will serve as an important reference data set for planning observations and interpreting data obtained with future ground and space-borne observatories, such as JWST. WISE was launched on 2009-12-14 from Vandenberg SLC2W. (1 data file).

  5. [Effect of spatial location on the generality of block-wise conflict adaptation between different types of scripts].

    PubMed

    Watanabe, Yurina; Yoshizaki, Kazuhito

    2014-10-01

    This study aimed to investigate the generality of conflict adaptation associated with block-wise conflict frequency between two types of stimulus scripts (Kanji and Hiragana). To this end, we examined whether the modulation of the compatibility effect for one type of script, which depended on block-wise conflict frequency (75% versus 25%), generalized to the other type of script, whose block-wise conflict frequency was kept constant (50%), using the Spatial Stroop task. In Experiment 1, 16 participants were required to identify the target orientation (up or down) presented in the upper or lower visual field. The results showed that block-wise conflict adaptation with one type of stimulus script generalized to the other. The procedure in Experiment 2 was the same as that in Experiment 1, except that the presentation location differed between the two types of stimulus scripts. We did not find a generalization from one script to the other. These results suggest that presentation location is a critical factor contributing to the generality of block-wise conflict adaptation.

  6. VizieR Online Data Catalog: WISE W1/W2 Tully-Fisher relation calibrator data (Neill+, 2014)

    NASA Astrophysics Data System (ADS)

    Neill, J. D.; Seibert, M.; Tully, R. B.; Courtois, H.; Sorce, J. G.; Jarrett, T. H.; Scowcroft, V.; Masci, F. J.

    2017-04-01

    We have initiated a separate project to provide high-quality surface photometry of all WISE galaxies larger than 0.8' on the sky. The WISE Nearby Galaxy Atlas (WNGA; M. Seibert et al., in preparation) will provide quality-controlled photometry for over 20000 galaxies. This photometry, optimized for extended sources, significantly reduces the scatter in the Tully-Fisher relation (hereafter TFR) calibration and thus improves the resulting distances. Having an accurate calibration of the TFR for these two WISE passbands will allow the use of this large sample to explore the structure and dynamics of local galaxy bulk flows. With the current tally, there are 310 cluster calibrators with WISE W1 and W2 photometry, compared with 213 available to Sorce et al. (2013, J/ApJ/765/94) for the Spitzer calibration, and 291 of the 310 WISE calibrators have I-band photometry, compared with the 267 available to Tully & Courtois (2012ApJ...749...78T) for the previous I-band calibration. (1 data file).

  7. Dealing with gene expression missing data.

    PubMed

    Brás, L P; Menezes, J C

    2006-05-01

    A comparative evaluation of different methods is presented for estimating missing values in microarray data: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors, such as the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), missing rate and pattern, and type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments], is elucidated. Improvements based on the iterative use of data (iterative LLS and PLS imputation--ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation--MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS--APLSimpute) are proposed. Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure, therefore influencing the methods' performance. LLSimpute and ILLSimpute are preferable in the presence of data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
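
    The row-wise versus column-wise distinction above maps directly onto off-the-shelf tools. As a minimal illustrative sketch (not the authors' own implementation), scikit-learn's KNNImputer can fill gaps in a toy expression matrix using either similar genes or, after transposing, similar arrays as neighbours; the matrix size, neighbour count, and missing rate below are assumptions.

      import numpy as np
      from sklearn.impute import KNNImputer

      # Toy expression matrix: rows are genes, columns are arrays; NaN marks missing values.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 12))
      X[rng.random(X.shape) < 0.05] = np.nan      # ~5% missing rate

      imputer = KNNImputer(n_neighbors=10, weights="distance")

      # Row-wise estimation: a gap in gene i is filled from the 10 most similar genes.
      X_row_wise = imputer.fit_transform(X)

      # Column-wise estimation: transpose so that whole arrays act as the neighbours instead.
      X_col_wise = imputer.fit_transform(X.T).T

      print(np.isnan(X_row_wise).sum(), np.isnan(X_col_wise).sum())   # 0 0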

  8. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
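
    To make the kind of symbol error rate figures discussed above concrete, here is a small Monte Carlo sketch of conventional two-symbol differential detection for ordinary quaternary DPSK over AWGN. It is a baseline illustration only (not the Q2PSK format analysed in the paper), and the Eb/N0 value and symbol count are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sym = 200_000
      EbN0_dB = 8.0
      EsN0 = 2 * 10 ** (EbN0_dB / 10)            # 2 bits per quaternary symbol

      # Differential encoding: each data symbol is a phase increment of k*pi/2.
      d = rng.integers(0, 4, n_sym)
      tx = np.exp(1j * np.cumsum(d) * (np.pi / 2))

      # Additive white Gaussian noise for unit-energy symbols.
      noise_std = np.sqrt(1.0 / (2.0 * EsN0))
      rx = tx + noise_std * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

      # Conventional (N = 2) differential detection: phase difference of consecutive samples.
      dphi = np.angle(rx[1:] * np.conj(rx[:-1]))
      d_hat = np.round(dphi / (np.pi / 2)) % 4
      print("simulated symbol error rate:", np.mean(d_hat != d[1:]))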

  9. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  10. Infrared Astronomy and Education: Linking Infrared Whole Sky Mapping with Teacher and Student Research

    NASA Astrophysics Data System (ADS)

    Borders, Kareen; Mendez, Bryan; Thaller, Michelle; Gorjian, Varoujan; Borders, Kyla; Pitman, Peter; Pereira, Vincent; Sepulveda, Babs; Stark, Ron; Knisely, Cindy; Dandrea, Amy; Winglee, Robert; Plecki, Marge; Goebel, Jeri; Condit, Matt; Kelly, Susan

    The Spitzer Space Telescope and the recently launched WISE (Wide Field Infrared Survey Explorer) observe the sky in infrared light. Among the objects WISE will study are asteroids, the coolest and dimmest stars, and the most luminous galaxies. Secondary students can do authentic research using infrared data. For example, students will use WISE data to measure physical properties of asteroids. In order to prepare students and teachers at this level with a high level of rigor and scientific understanding, the WISE and the Spitzer Space Telescope Education programs provided an immersive teacher professional development workshop in infrared astronomy. The lessons learned from the Spitzer and WISE teacher and student programs can be applied to other programs engaging them in authentic research experiences using data from space-borne observatories such as Herschel and Planck. Recently, WISE Educator Ambassadors and NASA Explorer School teachers developed and led an infrared astronomy workshop at Arecibo Observatory in Puerto Rico. As many common misconceptions involve scale and distance, teachers worked with Moon/Earth scale, solar system scale, and distance and age of objects in the Universe. Teachers built and used basic telescopes, learned about the history of telescopes, explored ground and satellite based telescopes, and explored and worked on models of WISE Telescope. An in-depth explanation of WISE and the Spitzer telescopes gave participants background knowledge for infrared astronomy observations. We taught the electromagnetic spectrum through interactive stations. We will outline specific steps for secondary astronomy professional development, detail student involvement in infrared telescope data analysis, provide data demonstrating the impact of the above professional development on educator understanding and classroom use, and detail future plans for additional secondary professional development and student involvement in infrared astronomy. Funding was provided by NASA, WISE Telescope, the Spitzer Space Telescope, the American Institute of Aeronautics and Astronautics, the National Optical Astronomy Observatory, Starbucks, and Washington Space Grant Consortium.

  11. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  12. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives: To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design: Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Setting: Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures: Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results: A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions: Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
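
    The per-1000 rates and confidence intervals quoted above follow directly from event counts and denominators. A minimal sketch of the normal-approximation interval for such a rate, using purely illustrative counts rather than the study's data:

      import math

      def rate_per_1000_ci(events, total, z=1.96):
          """Point estimate and normal-approximation 95% CI for a rate per 1000."""
          p = events / total
          se = math.sqrt(p * (1 - p) / total)
          return 1000 * p, 1000 * max(p - z * se, 0.0), 1000 * (p + z * se)

      rate, lo, hi = rate_per_1000_ci(events=15, total=12567)   # hypothetical counts
      print(f"{rate:.1f} per 1000 (95% CI {lo:.1f}-{hi:.1f})")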

  13. Boosting wisdom: distance from the self enhances wise reasoning, attitudes, and behavior.

    PubMed

    Kross, Ethan; Grossmann, Igor

    2012-02-01

    Although humans strive to be wise, they often fail to do so when reasoning over issues that have profound personal implications. Here we examine whether psychological distance enhances wise reasoning, attitudes and behavior under such circumstances. Two experiments demonstrate that cueing people to reason about personally meaningful issues (Study 1: Career prospects for the unemployed during an economic recession; Study 2: Anticipated societal changes associated with one's chosen candidate losing the 2008 U.S. Presidential election) from a distanced perspective enhances wise reasoning (dialecticism; intellectual humility), attitudes (cooperation-related attitude assimilation), and behavior (willingness to join a bipartisan group).

  14. Optimizing a community-engaged multi-level group intervention to reduce substance use: an application of the multiphase optimization strategy.

    PubMed

    Windsor, Liliane Cambraia; Benoit, Ellen; Smith, Douglas; Pinto, Rogério M; Kugler, Kari C

    2018-04-27

    Rates of alcohol and illicit drug use (AIDU) are consistently similar across racial groups (Windsor and Negi, J Addict Dis 28:258-68, 2009; Keyes et al. Soc Sci Med 124:132-41, 2015). Yet AIDU has significantly higher consequences for residents in distressed communities with concentrations of African Americans (DCAA - i.e., localities with high rates of poverty and crime), who also have considerably less access to effective treatment of substance use disorders (SUD). This project is optimizing Community Wise, an innovative multi-level behavioral-health intervention created in partnership with service providers and residents of distressed communities with histories of SUD and incarceration, to reduce health inequalities related to AIDU. Grounded in critical consciousness theory, community-based participatory research principles (CBPR), and the multiphase optimization strategy (MOST), this study employs a 2 × 2 × 2 × 2 factorial design to engineer the most efficient, effective, and scalable version of Community Wise that can be delivered for US$250 per person or less. This study is fully powered to detect change in AIDU in a sample of 528 men with histories of SUD and incarceration, residing in Newark, NJ, in the United States. A community collaborative board oversees recruitment using a variety of strategies, including indigenous field worker sampling, facility-based sampling, community advertisement through fliers, and street outreach. Participants are randomly assigned to one of 16 conditions that include a combination of the following candidate intervention components: peer or licensed facilitator, group dialogue, personal goal development, and community organizing. All participants receive a core critical-thinking component. Data are collected at baseline plus five post-baseline monthly follow-ups. Once the optimized Community Wise intervention is identified, it will be evaluated against an existing standard of care in a future randomized clinical trial. This paper describes the protocol of the first ever study using CBPR and MOST to optimize a substance use intervention targeting a marginalized population. Data from this study will culminate in an optimized Community Wise manual; enhanced methodological strategies to develop multi-component scalable interventions using MOST and CBPR; and a better understanding of the application of critical consciousness theory to the field of health inequalities related to AIDU. ClinicalTrials.gov, NCT02951455. Registered on 1 November 2016.
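
    For readers unfamiliar with the MOST factorial layout, the 16 experimental conditions come from crossing the four candidate components; a minimal sketch of enumerating them (the factor labels below are illustrative assumptions, not the trial's actual condition codes):

      from itertools import product

      # 2 x 2 x 2 x 2 factorial: facilitator type plus three on/off components.
      factors = {
          "facilitator": ["peer", "licensed"],
          "group_dialogue": ["off", "on"],
          "personal_goal_development": ["off", "on"],
          "community_organizing": ["off", "on"],
      }
      conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
      print(len(conditions))      # 16
      print(conditions[0])        # e.g. the all-baseline condition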

  15. Adaptive relative pose control for autonomous spacecraft rendezvous and proximity operations with thrust misalignment and model uncertainties

    NASA Astrophysics Data System (ADS)

    Sun, Liang; Zheng, Zewei

    2017-04-01

    An adaptive relative pose control strategy is proposed for a pursuer spacecraft in proximity operations with a tumbling target. The relative position vector between the two spacecraft is required to point towards the docking port of the target, while their attitudes must be synchronized. Considering the thrust misalignment of the pursuer, an integrated controller for the relative translational and relative rotational dynamics is developed by using norm-wise adaptive estimations. Parametric uncertainties, unknown coupled dynamics, and bounded external disturbances are compensated online by adaptive update laws. It is proved via Lyapunov stability theory that the tracking errors of the relative pose converge to zero asymptotically. Numerical simulations including six degrees-of-freedom rigid body dynamics are performed to demonstrate the effectiveness of the proposed controller.

  16. Inverse constraints for emission fluxes of atmospheric tracers estimated from concentration measurements and Lagrangian transport

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Patra, Prabir; Breivik, Knut

    2015-04-01

    Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas and biogenic emissions in Japan, and of pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as Object-Oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.

  17. UAVs and Machine Learning Revolutionising Invasive Grass and Vegetation Surveys in Remote Arid Lands.

    PubMed

    Sandino, Juan; Gonzalez, Felipe; Mengersen, Kerrie; Gaston, Kevin J

    2018-02-16

    The monitoring of invasive grasses and vegetation in remote areas is challenging, costly, and on the ground sometimes dangerous. Satellite and manned aircraft surveys can assist but their use may be limited due to the ground sampling resolution or cloud cover. Straightforward and accurate surveillance methods are needed to quantify rates of grass invasion, offer appropriate vegetation tracking reports, and apply optimal control methods. This paper presents a pipeline process to detect and generate a pixel-wise segmentation of invasive grasses, using buffel grass (Cenchrus ciliaris) and spinifex (Triodia sp.) as examples. The process integrates unmanned aerial vehicles (UAVs) also commonly known as drones, high-resolution red, green, blue colour model (RGB) cameras, and a data processing approach based on machine learning algorithms. The methods are illustrated with data acquired in Cape Range National Park, Western Australia (WA), Australia, orthorectified in Agisoft Photoscan Pro, and processed in Python programming language, scikit-learn, and eXtreme Gradient Boosting (XGBoost) libraries. In total, 342,626 samples were extracted from the obtained data set and labelled into six classes. Segmentation results provided an individual detection rate of 97% for buffel grass and 96% for spinifex, with a global multiclass pixel-wise detection rate of 97%. Obtained results were robust against illumination changes, object rotation, occlusion, background cluttering, and floral density variation.

  18. UAVs and Machine Learning Revolutionising Invasive Grass and Vegetation Surveys in Remote Arid Lands

    PubMed Central

    2018-01-01

    The monitoring of invasive grasses and vegetation in remote areas is challenging, costly, and on the ground sometimes dangerous. Satellite and manned aircraft surveys can assist but their use may be limited due to the ground sampling resolution or cloud cover. Straightforward and accurate surveillance methods are needed to quantify rates of grass invasion, offer appropriate vegetation tracking reports, and apply optimal control methods. This paper presents a pipeline process to detect and generate a pixel-wise segmentation of invasive grasses, using buffel grass (Cenchrus ciliaris) and spinifex (Triodia sp.) as examples. The process integrates unmanned aerial vehicles (UAVs) also commonly known as drones, high-resolution red, green, blue colour model (RGB) cameras, and a data processing approach based on machine learning algorithms. The methods are illustrated with data acquired in Cape Range National Park, Western Australia (WA), Australia, orthorectified in Agisoft Photoscan Pro, and processed in Python programming language, scikit-learn, and eXtreme Gradient Boosting (XGBoost) libraries. In total, 342,626 samples were extracted from the obtained data set and labelled into six classes. Segmentation results provided an individual detection rate of 97% for buffel grass and 96% for spinifex, with a global multiclass pixel-wise detection rate of 97%. Obtained results were robust against illumination changes, object rotation, occlusion, background cluttering, and floral density variation. PMID:29462912
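
    The two records above describe the same pixel-wise classification pipeline (image-derived features, XGBoost, six classes). A minimal sketch of that idea using the scikit-learn interface of the xgboost package, with randomly generated stand-in features and labels rather than the Cape Range data:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score
      from xgboost import XGBClassifier        # assumes the xgboost package is installed

      # Stand-in for labelled pixel samples: one row per pixel, label in {0..5}.
      rng = np.random.default_rng(42)
      X = rng.random((5000, 8))                # e.g. RGB plus derived texture/colour bands
      y = rng.integers(0, 6, 5000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
      clf.fit(X_tr, y_tr)
      print("pixel-wise accuracy:", accuracy_score(y_te, clf.predict(X_te)))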

  19. Continuum-wise expansiveness for generic diffeomorphisms

    NASA Astrophysics Data System (ADS)

    Lee, Manseob

    2018-06-01

    Let M be a closed smooth manifold and let f : M → M be a diffeomorphism. C1-generically, a continuum-wise expansive diffeomorphism satisfies Axiom A without cycles. Let and let . There are a C1 neighborhood of and a residual set such that for any , g is not continuum-wise expansive, where is the set of all robustly transitive diffeomorphisms on

  20. Impact of HealthWise South Africa on Polydrug Use and High-Risk Sexual Behavior

    ERIC Educational Resources Information Center

    Tibbits, Melissa K.; Smith, Edward A.; Caldwell, Linda L.; Flisher, Alan J.

    2011-01-01

    This study was designed to evaluate the efficacy of the HealthWise South Africa HIV and substance abuse prevention program at impacting adolescents' polydrug use and sexual risk behaviors. HealthWise is a school-based intervention designed to promote social-emotional skills, increase knowledge and refusal skills relevant to substance use and…

  1. A Closer Look at Chinese EFL Learners' Test-Wiseness Strategies in Reading Test

    ERIC Educational Resources Information Center

    Haiyan, Miao; Rilong, Liu

    2016-01-01

    This paper reports on an investigation into the relationship of test-takers' use of test-wiseness strategies to Chinese EFL learners' reading test performance. A test-wiseness questionnaire was administered immediately after the final achievement test to probe into how learners thought while completing the reading section of the test. It was found…

  2. Test-Wiseness Cues in the Options of Mathematics Items.

    ERIC Educational Resources Information Center

    Kuntz, Patricia

    The quality of mathematics multiple choice items and their susceptibility to test wiseness were examined. Test wiseness was defined as "a subject's capacity to utilize the characteristics and formats of the test and/or test taking situation to receive a high score." The study used results of the Graduate Record Examinations Aptitude Test (GRE) and…

  3. A Case Study Showing Parameters Affecting the Quality of Education: Faculty Perspective

    ERIC Educational Resources Information Center

    Kumari, Neeraj

    2014-01-01

    The study aims to examine the faculty members' perspective (age-wise, gender-wise, and work-experience-wise) of parameters affecting the quality of education in an affiliated Undergraduate Engineering Institution in Haryana. It is a descriptive type of research. The data has been collected with the help of 'Questionnaire Based Survey'. The sample…

  4. The SunWise School Program Guide: A School Program that Radiates Good Ideas

    ERIC Educational Resources Information Center

    US Environmental Protection Agency, 2003

    2003-01-01

    To help educators raise sun safety awareness, the U.S. Environmental Protection Agency (EPA) has developed the SunWise School Program, a national education program for children in grades K through 8. SunWise Partner Schools sponsor classroom and schoolwide activities that raise children's awareness of stratospheric ozone depletion, UV radiation,…

  5. Differential Benefits of Memory Training for Minority Older Adults in the SeniorWISE Study

    ERIC Educational Resources Information Center

    McDougall, Graham J., Jr.; Becker, Heather; Pituch, Keenan; Acee, Taylor W.; Vaughan, Phillip W.; Delville, Carol L.

    2010-01-01

    Purpose: Cognitive training improves mental abilities in older adults, but the benefit to minority elders is unclear. We conducted a subgroup analysis of subjects in the SeniorWISE (Wisdom Is Simply Exploration) trial to examine this issue. Design and Methods: SeniorWISE was a Phase 3 randomized trial that enrolled 265 nondemented…

  6. Key Elements of Observing Practice: A Data Wise DVD and Facilitator's Guide

    ERIC Educational Resources Information Center

    Boudett, Kathryn Parker; City, Elizabeth A.; Russell, Marcia K.

    2010-01-01

    Based on the bestselling book "Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning", and its companion volume, "Data Wise in Action", this DVD and Facilitator's Guide offer insight into one of the most challenging steps in capturing data about school performance: observing and analyzing instructional…

  7. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
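
    The association reported above amounts to regressing real-world confusion rates on laboratory error rates and checking how well the fit transfers. A minimal sketch with synthetic stand-in data (the predictors, coefficients and noise level are assumptions, not the study's measurements):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      # One row per drug-name pair: laboratory error rates for visual, auditory and
      # memory tests; the outcome is the real-world confusion error rate.
      rng = np.random.default_rng(1)
      lab = rng.random((80, 3))
      real_world = 0.5 * lab[:, 0] + 0.3 * lab[:, 2] + 0.1 * rng.standard_normal(80)

      model = LinearRegression().fit(lab, real_world)
      print("in-sample R^2:", round(model.score(lab, real_world), 2))

      # Cross-validated R^2, analogous to checking the fit against a second pharmacy chain.
      cv_r2 = cross_val_score(LinearRegression(), lab, real_world, cv=5, scoring="r2")
      print("cross-validated R^2:", round(cv_r2.mean(), 2))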

  8. Revisiting the Gamma-Ray Source 2FGL J1823.8+4312

    NASA Astrophysics Data System (ADS)

    Stern, Daniel; Assef, Roberto J.

    2013-02-01

    One of the great challenges of gamma-ray astronomy is identifying the lower energy counterparts to these high-energy sources. Recently, in this journal, Massaro et al. attempted to find the counterpart of 2FGL J1823.8+4312, a gamma-ray active galactic nucleus (AGN) of uncertain type from the Second Fermi Large Area Telescope catalog. After considering mid-infrared data in the field from the Wide-field Infrared Survey Explorer (WISE), those authors conclude that the preferred identification of 2FGL J1823.8+4312 is WISE J182352.33+431452.5, despite the fact that the mid-infrared source is undetected at radio energies. They claim that WISE J182352.33+431452.5 constitutes the discovery of a new class of extragalactic X-ray source, either a radio-faint blazar or the prototype of a new class of active galaxy with an enigmatic spectral energy distribution. This conclusion is claimed to be independent of whether or not the WISE source is the actual counterpart to 2FGL J1823.8+4312. Based on a re-analysis of public data in this field and new spectroscopy from Palomar, we conclude that WISE J182352.33+431452.5 is a dust-reddened quasar at z = 0.560, a representative example of a very common extragalactic AGN class. Were WISE J182352.33+431452.5 to be associated with the gamma-ray emission, this would be an unusual and exciting discovery. However, we argue that 2FGL J1823.8+4312 is more likely associated with either WISE J182409.25+431404.7 or, more likely, WISE J182419.04+430949.6, two radio-loud sources in the field. The former is a radio-loud quasar and the latter is an optically variable source with a featureless blue spectrum.

  9. Psychometric properties of the Spanish version of the Body Weight, Image and Self-Esteem Evaluation Questionnaire in patients with severe mental disorders.

    PubMed

    Al-Halabi, Susana; Garcia-Portilla, Maria Paz; Saiz, Pilar Alejandra; Fonseca, Eduardo; Bobes-Bascaran, Maria Teresa; Galván, Gonzalo; Iglesias, Celso; Arrojo, Manuel; Benabarre, Antoni; Goikolea, José Manuel; Sanchez, Emilio; Sarramea, Fernando; Bobes, Julio

    2012-11-01

    Clinicians need brief and valid instruments to monitor the psychosocial impact of weight gain in persons with psychiatric disorders. We examined the psychometric properties of the Spanish version of the Body Weight, Image and Self-Esteem Evaluation (B-WISE) questionnaire in patients with severe mental disorders. The data come from a naturalistic, cross-sectional, validation study conducted at 6 centres in Spain. A total of 211 outpatients with severe mental disorders, 118 with schizophrenia and 93 with bipolar disorder, were evaluated using the B-WISE, the Visual Analogue Scale for Weight and Body Image, and the Clinical Global Impression-Severity (CGI-S). The body mass index was also obtained. The principal component analysis confirms 3 components explaining 50.93% of the variance. The Cronbach α values for B-WISE scales ranged between .55 and .73. Significant Pearson correlations were found between B-WISE total score and CGI-S (r = -0.25; P < .001) and Visual Analogue Scale for Weight and Body Image (r = 0.47; P < .001). The B-WISE discriminates among patients with mild, moderate, and severe mental disorders according to CGI-S scores (F = 6.52; P < .005). Body mass index categorization significantly influenced total B-WISE scores (F = 3.586, P < .050). The B-WISE score corresponding to the 5th and 10th percentiles was 22. We were able to demonstrate that the Spanish version of the B-WISE is a valid instrument for assessing psychosocial impact of weight gain in patients with severe mental disorders in daily clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.
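
    Internal-consistency figures like the Cronbach α values above can be reproduced from raw item scores. A minimal sketch (the item count, response range, and sample size are assumptions for illustration, not the B-WISE data):

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_variances = items.var(axis=0, ddof=1).sum()
          total_variance = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_variances / total_variance)

      rng = np.random.default_rng(11)
      scores = rng.integers(1, 4, size=(211, 12))      # hypothetical Likert-type responses
      print(round(cronbach_alpha(scores), 2))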

  10. Detection of longitudinal visual field progression in glaucoma using machine learning.

    PubMed

    Yousefi, Siamak; Kiwaki, Taichi; Zheng, Yuhui; Suigara, Hiroki; Asaoka, Ryo; Murata, Hiroshi; Lemij, Hans; Yamanishi, Kenji

    2018-06-16

    Global indices of standard automated perimetry are insensitive to localized losses, while point-wise indices are sensitive but highly variable. Region-wise indices sit in between. This study introduces a machine-learning-based index for glaucoma progression detection that outperforms global, region-wise, and point-wise indices. Development and comparison of a prognostic index. Visual fields from 2085 eyes of 1214 subjects were used to identify glaucoma progression patterns using machine learning. Visual fields from 133 eyes of 71 glaucoma patients were collected 10 times over 10 weeks to provide a no-change, test-retest dataset. The parameters of all methods were identified using visual field sequences in the test-retest dataset to meet a fixed 95% specificity. An independent dataset of 270 eyes of 136 glaucoma patients and survival analysis were utilized to compare methods. The time to detect progression in 25% of the eyes in the longitudinal dataset using global mean deviation (MD) was 5.2 years (95% confidence interval, 4.1–6.5 years); 4.5 years (4.0–5.5) using region-wise, 3.9 years (3.5–4.6) using point-wise, and 3.5 years (3.1–4.0) using machine learning analysis. The times until 25% of eyes showed subsequently confirmed progression, after two additional visits were included, were 6.6 years (5.6–7.4), 5.7 years (4.8–6.7), 5.6 years (4.7–6.5), and 5.1 years (4.5–6.0) for global, region-wise, point-wise, and machine learning analyses, respectively. Machine learning analysis detects progressing eyes earlier than the other methods consistently, with or without confirmation visits. In particular, machine learning detects more slowly progressing eyes than other methods. Copyright © 2018 Elsevier Inc. All rights reserved.
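
    The "time to detect progression in 25% of eyes" figures come from survival analysis of the longitudinal series. A minimal Kaplan-Meier sketch using the lifelines package, with simulated follow-up times that are assumptions rather than the study's data:

      import numpy as np
      from lifelines import KaplanMeierFitter      # assumes the lifelines package is installed

      # Hypothetical follow-up: years until progression was flagged for each eye;
      # event=0 marks eyes still stable at their last visit (censored).
      rng = np.random.default_rng(7)
      years = np.minimum(rng.exponential(scale=8.0, size=270), 10.0)
      event = (years < 10.0).astype(int)

      kmf = KaplanMeierFitter()
      kmf.fit(durations=years, event_observed=event)

      # Time by which 25% of eyes have progressed, i.e. the survival curve drops to 0.75.
      sf = kmf.survival_function_["KM_estimate"]
      print("time until 25% of eyes progressed:", sf[sf <= 0.75].index.min(), "years")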

  11. Wisdom in Context.

    PubMed

    Grossmann, Igor

    2017-03-01

    Philosophers and psychological scientists have converged on the idea that wisdom involves certain aspects of thinking (e.g., intellectual humility, recognition of uncertainty and change), enabling application of knowledge to life challenges. Empirical evidence indicates that people's ability to think wisely varies dramatically across experiential contexts that they encounter over the life span. Moreover, wise thinking varies from one situation to another, with self-focused contexts inhibiting wise thinking. Experiments can show ways to buffer thinking against bias in cases in which self-interests are unavoidable. Specifically, an ego-decentering cognitive mind-set enables wise thinking about personally meaningful issues. It appears that experiential, situational, and cultural factors are even more powerful in shaping wisdom than previously imagined. Focus on such contextual factors sheds new light on the processes underlying wise thought and its development, helps to integrate different approaches to studying wisdom, and has implications for measurement and development of wisdom-enhancing interventions.

  12. Software Project Management and Measurement on the World-Wide-Web (WWW)

    NASA Technical Reports Server (NTRS)

    Callahan, John; Ramakrishnan, Sudhaka

    1996-01-01

    We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.

  13. Wide-field Infrared Survey Explorer

    NASA Technical Reports Server (NTRS)

    Padgett, Deborah

    2012-01-01

    We present WISE (Wide-field Infrared Survey Explorer) mid-infrared photometry of young stellar object candidates in the Canis Majoris clouds at a distance of 1 kpc. WISE has identified 682 objects with apparent 12 and 22 micron excess emission in a 7 deg x 10 deg field around the CMa R1 cloud. While a substantial fraction of these candidates are likely galaxies, AGB stars, and artifacts from confusion along the galactic plane, others are part of a spectacular cluster of YSOs imaged by WISE along a dark filament in the R1 cloud. Palomar Double Spectrograph observations of several sources in this cluster confirm their identity as young A and B stars with strong emission lines. In this contribution, we plot the optical to mid-infrared spectral energy distributions for the WISE YSO candidates and discuss potential contaminants to the sample. The data demonstrate the utility of WISE in performing wide-area surveys for young stellar objects.

  14. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
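
    The two-level external cross-validation recommended above is straightforward to sketch with scikit-learn: tuning happens in an inner loop, and the outer loop estimates the error of the whole tuning procedure. On label-free noise the estimate should hover near chance (the classifier, parameter grid, and data here are illustrative assumptions, not the paper's setup):

      import numpy as np
      from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      # Non-informative data: many noise predictors, random class labels.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 500))
      y = rng.integers(0, 2, 200)

      # Inner loop: choose the regularisation parameter.
      inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=StratifiedKFold(5))

      # Outer loop: estimate of the tuned procedure's accuracy (~0.5 expected here).
      outer_accuracy = cross_val_score(inner, X, y, cv=StratifiedKFold(5))
      print("two-level cross-validated accuracy:", round(outer_accuracy.mean(), 2))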

  15. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…

  16. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.
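
    Replicate-based error estimates of this kind reduce to counting discordant genotype calls between two runs on the same sample. A minimal sketch on synthetic calls (the injected discordance rate is an assumption for illustration only):

      import numpy as np

      # Replicate genotype calls coded as 0/1/2 copies of the non-reference allele.
      rng = np.random.default_rng(3)
      rep1 = rng.integers(0, 3, 100_000)
      rep2 = rep1.copy()
      flip = rng.random(rep1.size) < 0.002               # inject ~0.2% discordance
      rep2[flip] = (rep2[flip] + 1) % 3

      # Restrict to sites where either replicate carries a non-reference genotype call.
      nonref = (rep1 > 0) | (rep2 > 0)
      discordance = np.mean(rep1[nonref] != rep2[nonref])
      print(f"non-reference genotype discordance (lower bound on error rate): {discordance:.3%}")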

  17. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  18. THE SPITZER-WISE SURVEY OF THE ECLIPTIC POLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrett, T. H.; Masci, F.; Cutri, R. M.

    2011-07-10

    We have carried out a survey of the north and south ecliptic poles, EP-N and EP-S, respectively, with the Spitzer Space Telescope and the Wide-field Infrared Survey Explorer (WISE). The primary objective was to cross-calibrate WISE with the Spitzer and Midcourse Space Experiment (MSX) photometric systems by developing a set of calibration stars that are common to these infrared missions. The ecliptic poles were continuous viewing zones for WISE due to its polar-crossing orbit, making these areas ideal for both absolute and internal calibrations. The Spitzer IRAC and MIPS imaging survey covers a complete area of 0.40 deg² for the EP-N and 1.28 deg² for the EP-S. WISE observed the whole sky in four mid-infrared bands, 3.4, 4.6, 12, and 22 μm, during its eight-month cryogenic mission, including several hundred ecliptic polar passages; here we report on the highest coverage depths achieved by WISE, an area of ~1.5 deg² for both poles. Located close to the center of the EP-N, the Sy-2 galaxy NGC 6552 conveniently functions as a standard calibrator to measure the red response of the 22 μm channel of WISE. Observations from Spitzer-IRAC/MIPS/IRS-LL and WISE show that the galaxy has a strong red color in the mid-infrared due to star formation and the presence of an active galactic nucleus (AGN), while over a baseline >1 year the mid-IR photometry of NGC 6552 is shown to vary at a level less than 2%. Combining NGC 6552 with the standard calibrator stars, the achieved photometric accuracy of the WISE calibration, relative to the Spitzer and MSX systems, is 2.4%, 2.8%, 4.5%, and 5.7% for W1 (3.4 μm), W2 (4.6 μm), W3 (12 μm), and W4 (22 μm), respectively. The WISE photometry is internally stable to better than 0.1% over the cryogenic lifetime of the mission. The secondary objective of the Spitzer-WISE Survey was to explore the poles at greater flux-level depths, exploiting the higher angular resolution Spitzer observations and the exceptionally deep (in total coverage) WISE observations that potentially reach down to the confusion limit of the survey. The rich Spitzer and WISE data sets were used to study the Galactic and extragalactic populations through source counts, color-magnitude and color-color diagrams. As an example of what the data sets facilitate, we have separated stars from galaxies, delineated normal galaxies from power-law-dominated AGNs, and reported on the different fractions of extragalactic populations. In the EP-N, we find an AGN source density of ~260 deg⁻² to a 12 μm depth of 115 μJy, representing 15% of the total extragalactic population to this depth, similar to what has been observed for low-luminosity AGNs in other fields.

  19. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined as to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
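
    The adjusted estimates above come from a multivariable model of error risk per prescription. A minimal sketch of such an adjusted analysis with statsmodels; it fits a logistic model and reports odds ratios rather than the paper's adjusted relative risks, and all variable names and data below are hypothetical:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical prescription-level data mirroring the predictors discussed above.
      rng = np.random.default_rng(5)
      n = 4000
      df = pd.DataFrame({
          "error": rng.integers(0, 2, n),
          "computer_dose": rng.integers(0, 2, n),
          "senior_prescriber": rng.integers(0, 2, n),
          "paediatric_trained": rng.integers(0, 2, n),
      })

      model = smf.logit("error ~ computer_dose + senior_prescriber + paediatric_trained",
                        data=df).fit(disp=0)
      print(np.exp(model.params))              # adjusted odds ratios
      print(np.exp(model.conf_int()))          # 95% confidence intervals on that scale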

  20. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key parameters, the rotating angular rate strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.

  1. Comparison of Meropenem MICs and Susceptibilities for Carbapenemase-Producing Klebsiella pneumoniae Isolates by Various Testing Methods▿

    PubMed Central

    Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.

    2010-01-01

    We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
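
    The very major/major/minor error terminology above is defined by cross-tabulating categorical calls against the reference method. A minimal sketch of that bookkeeping (the MIC values and breakpoints below are illustrative assumptions, not CLSI or FDA breakpoints):

      import pandas as pd

      def categorize(mic, s_break=1, r_break=4):
          """Map an MIC (mg/L) to S/I/R using illustrative breakpoints."""
          if mic <= s_break:
              return "S"
          if mic >= r_break:
              return "R"
          return "I"

      # Paired MICs: reference broth microdilution versus a test method (toy values).
      pairs = pd.DataFrame({"reference": [8, 16, 32, 2, 64, 1],
                            "test":      [8,  8,  1, 2, 64, 2]})
      cats = pairs.apply(lambda col: col.map(categorize))

      very_major = ((cats.reference == "R") & (cats.test == "S")).mean()   # false susceptible
      major = ((cats.reference == "S") & (cats.test == "R")).mean()        # false resistant
      minor = (cats.reference != cats.test).mean() - very_major - major    # disagreements involving I
      print(f"very major {very_major:.1%}, major {major:.1%}, minor {minor:.1%}")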

  2. Assessing High Order Thinking of Students Participating in the "WISE" Project in Israel.

    ERIC Educational Resources Information Center

    Tal, Revital; Hochberg, Nurit

    2003-01-01

    Studied the higher order thinking of 53 Israeli ninth graders in 3 schools using the Web-Based Inquiry Science Environment (WISE) learning environment to study about malaria. Findings show that all students used higher order thinking skills and that their English was good enough to use the WISE learning environment in the Israeli setting. (SLD)

  3. A survey of ground operations tools developed to simulate the pointing of space telescopes and the design for WISE

    NASA Technical Reports Server (NTRS)

    Fabinsky, Beth

    2006-01-01

    WISE, the Wide Field Infrared Survey Explorer, is scheduled for launch in June 2010. The mission operations system for WISE requires a software modeling tool to help plan, integrate and simulate all spacecraft pointing and verify that no attitude constraints are violated. In the course of developing the requirements for this tool, an investigation was conducted into the design of similar tools for other space-based telescopes. This paper summarizes the ground software and processes used to plan and validate pointing for a selection of space telescopes; with this information as background, the design for WISE is presented.

  4. Investigation of deformation mechanisms of staggered nanocomposites using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Mathiazhagan, S.; Anup, S.

    2016-08-01

    Biological materials with nanostructure of regularly or stair-wise staggered arrangements of hard platelets reinforced in a soft protein matrix have superior mechanical properties. Applications of these nanostructures to ceramic matrix composites could enhance their toughness. Using molecular dynamics simulations, mechanical behaviour of the bio-inspired nanocomposites is studied. Regularly staggered model shows better flow behaviour compared to stair-wise staggered model due to the symmetrical crack propagation along the interface. Though higher stiffness and strength are obtained for stair-wise staggered models, rapid crack propagation reduces the toughness. Arresting this crack propagation could lead to superior mechanical properties in stair-wise staggered models.

  5. Is there a step-wise migration in Nigeria? A case study of the migrational histories of migrants in Lagos.

    PubMed

    Afolayan, A A

    1985-09-01

    "The paper sets out to test whether or not the movement pattern of people in Nigeria is step-wise. It examines the spatial order in the country and the movement pattern of people. It then analyzes the survey data and tests for the validity of step-wise migration in the country. The findings show that step-wise migration cannot adequately describe all the patterns observed." The presence of large-scale circulatory migration between rural and urban areas is noted. Ways to decrease the pressure on Lagos by developing intermediate urban areas are considered. excerpt

  6. WISE and the Dusty Universe

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.

    2010-01-01

    The Wide-field Infrared Survey Explorer (WISE) is a medium-class Explorer mission that was launched on 14 Dec 2009. WISE should detect hundreds of millions of stars and galaxies, including millions of ULIRGs and QSOs; hundreds of thousands of asteroids; and hundreds of cold brown dwarfs. The telescope cover was ejected on 29 Dec 2009 and the all-sky survey started on 14 Jan 2010. WISE takes more than 7000 framesets per day, with each frameset covering 0.6 square degrees in four bands centered at 3.4, 4.6, 12 and 22 microns. WISE is well-suited to the discovery of brown dwarfs, ultraluminous infrared galaxies, and near-Earth objects. With an angular resolution of 6 arcseconds at 12 microns, a 5-sigma point-source sensitivity of around 1 mJy at 12 microns and 6 mJy at 22 microns, and coverage of over 99% of the sky, WISE also provides a powerful database for the study of the dusty ISM in our own galaxy. A preliminary release of WISE data will be made available to the community 6 months after the end of the cryogenic survey, or about May 2011. The final data release will be 11 months later, about April 2012.

  7. The First Brown Dwarf Discovered by the Backyard Worlds: Planet 9 Citizen Science Project

    NASA Technical Reports Server (NTRS)

    Kuchner, Marc J.; Faherty, Jacqueline K.; Schneider, Adam C.; Meisner, Aaron M.; Filippazzo, Joseph C.; Gagne, Jonathan; Trouille, Laura; Silverberg, Steven M.; Castro, Rosa; Fletcher, Bob; hide

    2017-01-01

    The Wide-field Infrared Survey Explorer (WISE) is a powerful tool for finding nearby brown dwarfs and searching for new planets in the outer solar system, especially with the incorporation of NEOWISE and NEOWISE Reactivation data. However, so far, searches for brown dwarfs in WISE data have yet to take advantage of the full depth of the WISE images. To efficiently search this unexplored space via visual inspection, we have launched a new citizen science project, called "Backyard Worlds: Planet 9," which asks volunteers to examine short animations composed of difference images constructed from time-resolved WISE coadds. We report the first new substellar object discovered by this project, WISEA J110125.95+540052.8, a T5.5 brown dwarf located approximately 34 pc from the Sun with a total proper motion of approximately 0.7"/yr. WISEA J110125.95+540052.8 has a WISE W2 magnitude of W2 = 15.37 ± 0.09; our sensitivity to this source demonstrates the ability of citizen scientists to identify moving objects via visual inspection that are 0.9 mag fainter than the W2 single-exposure sensitivity, a threshold that has limited prior motion-based brown dwarf searches with WISE.

  8. The Aging Well through Interaction and Scientific Education (AgeWISE) Program.

    PubMed

    O'Connor, Maureen K; Kraft, Malissa L; Daley, Ryan; Sugarman, Michael A; Clark, Erika L; Scoglio, Arielle A J; Shirk, Steven D

    2017-12-08

    We conducted a randomized controlled trial of the Aging Well through Interaction and Scientific Education (AgeWISE) program, a 12-week manualized cognitive rehabilitation program designed to provide psychoeducation to older adults about the aging brain, lifestyle factors associated with successful brain aging, and strategies to compensate for age-related cognitive decline. Forty-nine cognitively intact participants ≥ 60 years old were randomly assigned to the AgeWISE program (n = 25) or a no-treatment control group (n = 24). Questionnaire data were collected prior to group assignment and post-intervention. Two-factor repeated-measures analyses of covariance (ANCOVAs) were used to compare group outcomes. Upon completion, participants in the AgeWISE program reported increases in memory contentment and their sense of control in improving memory; no significant changes were observed in the control group. Surprisingly, participation in the group was not associated with significant changes in knowledge of memory aging, perception of memory ability, or greater use of strategies. The AgeWISE program was successfully implemented and increased participants' memory contentment and their sense of control in improving memory in advancing age. This study supports the use of AgeWISE to improve perspectives on healthy cognitive aging.

  9. Comparison of estimators of standard deviation for hydrologic time series

    USGS Publications Warehouse

    Tasker, Gary D.; Gilroy, Edward J.

    1982-01-01

    Unbiasing factors as a function of serial correlation, ρ, and sample size, n for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: s = [(1/(n − 1)) Σ (x_i − x̄)²]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
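
    A minimal Monte Carlo sketch of the unbiasing-factor idea, under the assumption of a stationary lag-one autoregressive (AR(1)) process with unit marginal standard deviation; the factor is estimated as 1/E[s] rather than taken from the paper's tables.

        import numpy as np

        def ar1_series(n, rho, sigma=1.0, rng=None):
            """Generate a lag-one autoregressive series with marginal std sigma."""
            rng = rng if rng is not None else np.random.default_rng(0)
            x = np.empty(n)
            x[0] = rng.normal(0.0, sigma)
            innov_sd = sigma * np.sqrt(1.0 - rho**2)
            for i in range(1, n):
                x[i] = rho * x[i - 1] + rng.normal(0.0, innov_sd)
            return x

        def mean_sample_sd(n, rho, reps=5000):
            rng = np.random.default_rng(1)
            return np.mean([np.std(ar1_series(n, rho, rng=rng), ddof=1)
                            for _ in range(reps)])

        for rho in (0.0, 0.3, 0.6):
            s_bar = mean_sample_sd(n=20, rho=rho)
            print(f"rho={rho:.1f}  E[s]/sigma ~ {s_bar:.3f}  unbiasing factor ~ {1/s_bar:.3f}")

    The simulated E[s] falls increasingly below sigma as the serial correlation grows, which is the underestimation (and resulting underdesign tendency) the abstract describes.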

  10. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, which is a common procedure for pain management. Patients are always in a supine position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine is different between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm, with a standard deviation of 1.14 mm.

  11. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved a RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
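
    A minimal sketch of voxel-wise regression with leave-one-out RMSE, using synthetic stand-in features and ordinary least squares in place of the paper's MARS/PR/GAM models; the feature count, sample size, and noise level are assumptions for illustration.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        # Synthetic stand-in data: rows are voxels, columns are mpMRI-derived
        # features; the target is cell density (arbitrary units).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200)

        model = LinearRegression()                       # stand-in for MARS/PR/GAM
        y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())
        rmse = np.sqrt(np.mean((y - y_hat) ** 2))
        print(f"leave-one-out RMSE: {rmse:.3f}")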

  12. Do Future Teachers Choose Wisely? A Study of Pre-Service Teachers' Personality Preference Profiles

    ERIC Educational Resources Information Center

    Thornton, Bill; Peltier, Gary; Hill, Gus

    2005-01-01

    The No Child Left Behind Act requires that all teachers in core academic subjects be "highly qualified" by the end of the 2005-06 school year. New teachers leave the profession at an alarming rate--research indicates that 50% have left within five years of their first job. This article explores the personality types of pre-service…

  13. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences between the two groups in pre- and post-intervention incidence rates of nursing errors. The incidence rate of nursing errors in the experimental group decreased significantly compared with the pre-test value, from 28.4% to 15.7%. By domain, the incidence rate decreased significantly in three domains ("compliance of aseptic technique", "management of document", "environmental management") in the experimental group, while it decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk management efforts across the whole health care system.
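
    A minimal sketch of the group comparison described above, using scipy's Mann-Whitney U test on hypothetical pre-to-post changes in error incidence per nurse (the numbers are invented for illustration, not the study's data).

        from scipy.stats import mannwhitneyu

        # Hypothetical change in error incidence (percentage points) per nurse
        experimental = [-13, -10, -15, -9, -12, -14, -8, -11, -12, -13, -10, -9, -14]
        control      = [-2,  0, -3, -1,  0, -2,  1, -1,  0, -3, -2,  0, -1]

        u_stat, p_value = mannwhitneyu(experimental, control, alternative="two-sided")
        print(f"U = {u_stat:.1f}, p = {p_value:.4f}")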

  14. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection

    PubMed Central

    Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-01-01

    Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474

  15. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.

    PubMed

    Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-08-18

    The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
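
    A minimal sketch of the two quantities reported above: a crude error rate (errors committed divided by error opportunities) and a logistic regression of per-opportunity error probability on days of application use. The data are simulated for illustration; the decay rate and cohort size are assumptions.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_surveys = 800
        days = rng.integers(0, 46, size=n_surveys).astype(float)
        opportunities = np.full(n_surveys, 11)                  # up to 11 detectable errors per survey
        p_err = 0.023 * np.exp(-0.03 * days)                    # error probability decays with practice
        errors = rng.binomial(opportunities, p_err)

        # Crude aggregate error rate = total errors / total opportunities
        print(f"aggregate error rate: {errors.sum() / opportunities.sum():.4f}")

        # Binomial GLM (logistic link) of error probability on days of use
        X = sm.add_constant(days)
        fit = sm.GLM(np.column_stack([errors, opportunities - errors]),
                     X, family=sm.families.Binomial()).fit()
        print(f"odds ratio per additional day: {np.exp(fit.params[1]):.3f}")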

  16. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.

  17. unWISE: Unblurred Coadds of the WISE Imaging

    NASA Astrophysics Data System (ADS)

    Lang, Dustin

    2014-05-01

    The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.

  18. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N²), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N²). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
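
    For context on the O(N²) cost that multipole methods avoid, here is a minimal direct pairwise summation of a 1/r Coulomb term plus the attractive 1/r^6 Lennard-Jones term over all unique pairs; the particle count, charges, and units are arbitrary, and this is not MDMA itself.

        import numpy as np

        def direct_pair_energy(pos, q, eps=1.0, sigma=1.0):
            """Naive O(N^2) loop over all unique pairs: Coulomb (1/r) plus the
            dispersion (1/r^6) part of the Lennard-Jones potential."""
            n = len(pos)
            energy = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(pos[i] - pos[j])
                    energy += q[i] * q[j] / r                  # Coulomb term
                    energy -= 4.0 * eps * (sigma / r) ** 6     # LJ dispersion term
            return energy

        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 10.0, size=(200, 3))            # 200 particles in a 10x10x10 box
        q = rng.choice([-1.0, 1.0], size=200)
        print(f"total pair energy: {direct_pair_energy(pos, q):.3f}")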

  19. F157. HIERARCHICAL PREDICTION ERRORS DURING AUDITORY MISMATCH UNDER PHARMACOLOGICAL MANIPULATIONS: A COMPUTATIONAL SINGLE-TRIAL EEG ANALYSIS

    PubMed Central

    Weber, Lilian; Diaconescu, Andreea; Tomiello, Sara; Schöbi, Dario; Iglesias, Sandra; Mathys, Christoph; Haker, Helene; Stefanics, Gabor; Schmidt, André; Kometer, Michael; Vollenweider, Franz X; Stephan, Klaas Enno

    2018-01-01

    Abstract Background A central theme of contemporary neuroscience is the notion that the brain embodies a generative model of its sensory inputs to infer on the underlying environmental causes, and that it uses hierarchical prediction errors (PEs) to continuously update this model. In two pharmacological EEG studies, we investigate trial-wise hierarchical PEs during the auditory mismatch negativity (MMN), an electrophysiological response to unexpected events, which depends on NMDA-receptor mediated plasticity and has repeatedly been shown to be reduced in schizophrenia. Methods Study1: Reanalysis of 64 channel EEG data from a previously published MMN study (Schmidt et al., 2012) using a placebo-controlled, within-subject design (N=19) to examine the effect of S-ketamine. Study2: 64 channel EEG data recorded during MMN (between subjects, double-blind, placebo-controlled design, N=73), to examine the effects of amisulpride and biperiden. Using the Hierarchical Gaussian Filter, a Bayesian learning model, we extracted trial-by-trial PE estimates on two hierarchical levels. These served as regressors in a GLM of trial-wise EEG signals at the sensor level. Results We find strong correlations of EEG with both PEs in both samples: lower-level PEs show effects early on (Study1: 133ms post-stimulus, Study2: 177ms), higher-level PEs later (Study1: 240ms, Study2: 450ms). The temporal order of these signatures thus mimics the hierarchical relationship of the PEs, as proposed by our computational model, where lower level beliefs need to be updated before learning can ensue on higher levels. Ketamine significantly reduced the representation of the higher-level PE in Study1. (Study2 has not been unblinded.) Discussion These studies present first evidence for hierarchical PEs during MMN and demonstrate that single-trial analyses guided by a computational model can distinguish different types (levels) of PEs, which are differentially linked to neuromodulators of demonstrated relevance for schizophrenia. Our analysis approach thus provides better mechanistic interpretability of pharmacological MMN studies, which will hopefully support the development of computational assays for diagnosis and treatment predictions in schizophrenia.

  20. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    PubMed

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
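
    A minimal sketch of the supervised classification step, using scikit-learn's SVC on synthetic stand-in quality features (the study's in-house global and ROI features are not reproduced here) and reporting accuracy against held-out labels.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Synthetic stand-in: each row is one 3D-MRI volume described by a few
        # automated quality metrics; labels are rater decisions (1 = usable).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=600) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
        print(f"hold-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")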

  1. The 2-degree Field Lensing Survey: photometric redshifts from a large new training sample to r < 19.5

    NASA Astrophysics Data System (ADS)

    Wolf, C.; Johnson, A. S.; Bilicki, M.; Blake, C.; Amon, A.; Erben, T.; Glazebrook, K.; Heymans, C.; Hildebrandt, H.; Joudaki, S.; Klaes, D.; Kuijken, K.; Lidman, C.; Marin, F.; Parkinson, D.; Poole, G.

    2017-04-01

    We present a new training set for estimating empirical photometric redshifts of galaxies, which was created as part of the 2-degree Field Lensing Survey project. This training set is located in a ~700 deg² area of the Kilo-Degree-Survey South field and is randomly selected and nearly complete at r < 19.5. We investigate the photometric redshift performance obtained with ugriz photometry from VST-ATLAS and W1/W2 from WISE, based on several empirical and template methods. The best redshift errors are obtained with kernel-density estimation (KDE), as are the lowest biases, which are consistent with zero within statistical noise. The 68th percentiles of the redshift scatter for magnitude-limited samples at r < (15.5, 17.5, 19.5) are (0.014, 0.017, 0.028). In this magnitude range, there are no known ambiguities in the colour-redshift map, consistent with a small rate of redshift outliers. In the fainter regime, the KDE method produces p(z) estimates per galaxy that represent unbiased and accurate redshift frequency expectations. The p(z) sum over any subsample is consistent with the true redshift frequency plus Poisson noise. Further improvements in redshift precision at r < 20 would mostly be expected from filter sets with narrower passbands to increase the sensitivity of colours to small changes in redshift.

  2. Morphology control in polymer blend fibers—a high throughput computing approach

    NASA Astrophysics Data System (ADS)

    Sesha Sarath Pokuri, Balaji; Ganapathysubramanian, Baskar

    2016-08-01

    Fibers made from polymer blends have conventionally enjoyed wide use, particularly in textiles. This wide applicability is primarily aided by the ease of manufacturing such fibers. More recently, the ability to tailor the internal morphology of polymer blend fibers by carefully designing processing conditions has enabled such fibers to be used in technologically relevant applications. Some examples include anisotropic insulating properties for heat and anisotropic wicking of moisture, coaxial morphologies for optical applications as well as fibers with high internal surface area for filtration and catalysis applications. However, identifying the appropriate processing conditions from the large space of possibilities using conventional trial-and-error approaches is a tedious and resource-intensive process. Here, we illustrate a high throughput computational approach to rapidly explore and characterize how processing conditions (specifically blend ratio and evaporation rates) affect the internal morphology of polymer blends during solvent based fabrication. We focus on a PS: PMMA system and identify two distinct classes of morphologies formed due to variations in the processing conditions. We subsequently map the processing conditions to the morphology class, thus constructing a ‘phase diagram’ that enables rapid identification of processing parameters for specific morphology class. We finally demonstrate the potential for time dependent processing conditions to get desired features of the morphology. This opens up the possibility of rational stage-wise design of processing pathways for tailored fiber morphology using high throughput computing.

  3. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  4. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
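
    A minimal sketch of how a reporting rate per 1000 errors with a normal-approximation 95% confidence interval (as in the 1.2/1000, 95% CI 0.6-1.8 figure above) can be computed; the count of 15 reported errors is inferred from the quoted rate and denominator, not taken from the paper.

        import math

        def rate_per_1000_ci(reported, total, z=1.96):
            """Normal-approximation (Wald) 95% CI for a proportion, scaled per 1000."""
            p = reported / total
            se = math.sqrt(p * (1.0 - p) / total)
            return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

        rate, lo, hi = rate_per_1000_ci(reported=15, total=12_567)
        print(f"{rate:.1f}/1000 (95% CI {lo:.1f}-{hi:.1f})")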

  5. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.

  6. Two Stochastic Phases of Tick-wise Price Fluctuation and the Price Prediction Generator

    NASA Astrophysics Data System (ADS)

    Tanaka-Yamawaki, Mieko; Tokuoka, Seiji

    2007-07-01

    We report in this paper the existence of two different stochastic phases in the tick-wise price fluctuations. Based on this observation, we improve our old method of developing the evolutional strategy to predict the direction of the tick-wise price movements. We obtain a stable predictive power even in the region where the old method had a difficulty.

  7. Sun Savvy Students: Free Teaching Resources from EPA's SunWise Program

    ERIC Educational Resources Information Center

    Hall-Jordan, Luke

    2008-01-01

    With summer in full swing and the Sun naturally on our minds, what better time to take advantage of a host of free materials provided by the U.S. Environmental Protection Agency's SunWise program. SunWise aims to teach students and teachers about the stratospheric ozone layer, ultraviolet (UV) radiation, and how to be safe while in the Sun.…

  8. The Waste Wise Schools Program: Evidence of Educational, Environmental, Social and Economic Outcomes at the School and Community Level

    ERIC Educational Resources Information Center

    Armstrong, Patricia; Sharpley, Brian; Malcolm, Stephen

    2004-01-01

    The Waste Wise Schools Program was established by EcoRecycle Victoria to implement waste and litter education in Victorian schools. It is now operating in over 900 schools in Victoria and 300 schools in other Australian states / territories. This paper provides detailed case studies of two active schools in the Waste Wise Schools Program and…

  9. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
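
    A minimal sketch of the Fisher's exact test comparison of error rates by technique; the 2x2 counts below are hypothetical (the abstract reports only the rates, not the exact split of fractions by technique).

        from scipy.stats import fisher_exact

        # Hypothetical counts chosen to roughly match the quoted 0.03% vs 0.07% rates
        imrt_errors, imrt_fractions = 30, 100_000
        conv_errors, conv_fractions = 99, 141_546

        table = [[imrt_errors, conv_errors],
                 [imrt_fractions - imrt_errors, conv_fractions - conv_errors]]
        odds_ratio, p_value = fisher_exact(table)
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")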

  10. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
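
    A minimal sketch of one common error-rate convention (errors per base per template doubling), included to illustrate the kind of normalization that makes cross-enzyme comparisons possible; the numbers are invented and the exact normalization used in the paper may differ.

        import math

        def pcr_error_rate(mutations, bases_sequenced, input_ng, product_ng):
            """Errors per base per template doubling, with the number of doublings
            estimated as d = log2(product DNA / input DNA)."""
            doublings = math.log2(product_ng / input_ng)
            return mutations / (bases_sequenced * doublings)

        rate = pcr_error_rate(mutations=12, bases_sequenced=500_000,
                              input_ng=1.0, product_ng=1024.0)
        print(f"error rate ~ {rate:.2e} errors per base per doubling")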

  11. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricane and other extreme weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process that provides information about the current weather status at the coordinates of each region, with the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15, while the error value for minimum humidity is 0.38 and for maximum humidity 0.04. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the more optimized the accuracy is.

  12. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
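
    A minimal simulation sketch of the type I error check described above: genotypes are drawn at a given minor allele frequency, a trait is simulated under the null (no association), and the fraction of simple-linear-regression p-values below alpha is recorded. Sample size, replicate count, and the gamma trait parameters are assumptions for illustration.

        import numpy as np
        from scipy import stats

        def type1_error_rate(maf, n=1000, sims=2000, alpha=0.05, trait="gamma"):
            rng = np.random.default_rng(0)
            hits = tested = 0
            for _ in range(sims):
                g = rng.binomial(2, maf, size=n)          # additive genotype 0/1/2
                if g.std() == 0:                          # monomorphic draw, cannot regress
                    continue
                y = rng.gamma(1.0, 1.0, n) if trait == "gamma" else rng.normal(size=n)
                tested += 1
                hits += stats.linregress(g, y).pvalue < alpha
            return hits / tested

        for maf in (0.20, 0.01, 0.002):
            print(f"MAF={maf:<6} gamma trait: {type1_error_rate(maf):.3f}  "
                  f"normal trait: {type1_error_rate(maf, trait='normal'):.3f}")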

  13. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic spline based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one minute rain rates are averaged to 4-7 minute or longer time scales, the errors dramatically reduce. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at 7-minute scale. The radar reflectivity-rainrate (Ze-R) distributions drawn from large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
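
    A minimal sketch of the cubic-spline idea: fit a spline to cumulative tipping-bucket rainfall versus tip time and differentiate it to obtain a continuous rain-rate estimate. The tip times and the 0.254 mm bucket size are hypothetical values for illustration.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Hypothetical tipping-bucket record: each tip adds 0.254 mm of accumulation
        tip_times = np.array([0, 95, 160, 205, 240, 290, 360, 480, 700], dtype=float)  # seconds
        cum_rain = 0.254 * np.arange(1, len(tip_times) + 1)                            # mm

        spline = CubicSpline(tip_times, cum_rain)
        minutes = np.arange(0.0, tip_times[-1], 60.0)
        rate_mm_per_h = spline(minutes, 1) * 3600.0        # d(mm)/d(s) -> mm/h
        for t, r in zip(minutes, rate_mm_per_h):
            print(f"t = {t/60:4.0f} min   rain rate ~ {max(r, 0.0):.1f} mm/h")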

  14. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    (DSTO-TN-0761) This report investigates the estimation of bit error rates in digital communications, motivated by ... recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase

  15. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.

  16. Synergy of WISE and SDSS in Stripe 82

    NASA Astrophysics Data System (ADS)

    Musin, Marat; Yan, Haojing

    2018-06-01

    We report the current results from our effort to synergize WISE and SDSS in the ~ 300 sq. degree Stripe 82 region. Using the SDSS images as the prior, we fit the SDSS-detected objects to the WISE W1/W2 images to obtain consistent optical-to-IR SEDs. The major outcome will consist of two catalogs: (1) one is the "SDSS-WISE" photometric catalog on ~ 22 million SDSS-detected sources, and (2) the other one is the "WoDrop" catalog that are optical-dropouts detected on the residual W1/W2 images that do not have SDSS counterparts. The applications and the implications of our results will be briefly discussed.

  17. GridWise Standards Mapping Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosquet, Mia L.

    "GridWise" is a concept of how advanced communications, information and controls technology can transform the nation's energy system--across the spectrum of large scale, central generation to common consumer appliances and equipment--into a collaborative network, rich in the exchange of decision making information and an abundance of market-based opportunities (Widergren and Bosquet 2003) accompanying the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts which are related to GridWise--those which could ultimately contribute significantly to advancements toward the GridWise vision, or those which represent today's current technological basis upon which this vision must build.

  18. Regional Gray Matter Volumes Are Related to Concern About Falling in Older People: A Voxel-Based Morphometric Study.

    PubMed

    Tuerk, Carola; Zhang, Haobo; Sachdev, Perminder; Lord, Stephen R; Brodaty, Henry; Wen, Wei; Delbaere, Kim

    2016-01-01

    Concern about falling is common in older people. Various related psychological constructs as well as poor balance and slow gait have been associated with decreased gray matter (GM) volume in old age. The current study investigates the association between concern about falling and voxel-wise GM volumes. A total of 281 community-dwelling older people aged 70-90 years underwent structural magnetic resonance imaging. Concern about falling was assessed using Falls Efficacy Scale-International (FES-I). For each participant, voxel-wise GM volumes were generated with voxel-based morphometry and regressed on raw FES-I scores (p < .05 family-wise error corrected on cluster level). FES-I scores were negatively correlated with total brain volume (r = -.212; p ≤ .001), GM volume (r = -.210; p ≤ .001), and white matter volume (r = -.155; p ≤ .001). Voxel-based morphometry analysis revealed significant negative associations between FES-I and GM volumes of (i) left cerebellum and bilateral inferior occipital gyrus (voxels-in-cluster = 2,981; p < .001) and (ii) bilateral superior frontal gyrus and left supplementary motor area (voxels-in-cluster = 1,900; p = .004). Additional adjustment for vision and physical fall risk did not alter these associations. After adjustment for anxiety, only left cerebellum and bilateral inferior occipital gyrus remained negatively associated with FES-I scores (voxels-in-cluster = 2,426; p < .001). Adjustment for neuroticism removed all associations between FES-I and GM volumes. Our study findings show that concern about falling is negatively associated with brain volumes in areas important for emotional control and for motor control, executive functions and visual processing in a large sample of older men and women. Regression analyses suggest that these relationships were primarily accounted for by psychological factors (generalized anxiety and neuroticism) and not by physical fall risk or vision. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose: To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods: A comprehensive model, which directly considers eye movements (including saccades, vestibular, optokinetic, vergence, and miniature movements) as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay, has been developed. Results: Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions: The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. No single parameter alone minimizes the positioning error; it is the optimal combination of several parameters that minimizes it. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.

  20. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).

  1. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
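
    The following is a minimal single-domain sketch of the Jacobian-free Newton–Krylov idea referenced above, using SciPy's newton_krylov with a GMRES linear solver to take one backward-Euler step of a 1D nonlinear diffusion equation. The grid, time step, nonlinearity D(u) = 1 + u², and boundary treatment are illustrative assumptions; the authors' coupled, noise-forced domain-decomposition setup is considerably more involved.

      # Minimal sketch (illustrative assumptions, not the authors' code): one
      # backward-Euler step of a 1D nonlinear diffusion problem, solved with a
      # Jacobian-free Newton-Krylov iteration and a GMRES inner linear solver.
      import numpy as np
      from scipy.optimize import newton_krylov

      nx, dt = 101, 1e-4
      x = np.linspace(0.0, 1.0, nx)
      dx = x[1] - x[0]
      u_old = np.exp(-100.0 * (x - 0.5) ** 2)      # state at the previous time level

      def diffusivity(u):
          return 1.0 + u ** 2                      # assumed nonlinear D(u)

      def residual(u_new):
          """Backward-Euler residual F(u_new) = 0 with fixed Dirichlet boundaries."""
          d_face = diffusivity(0.5 * (u_new[1:] + u_new[:-1]))   # face-centred D
          flux = d_face * (u_new[1:] - u_new[:-1]) / dx          # interior fluxes
          f = np.empty_like(u_new)
          f[1:-1] = (u_new[1:-1] - u_old[1:-1]) / dt - (flux[1:] - flux[:-1]) / dx
          f[0] = u_new[0] - u_old[0]               # boundaries held at their old values
          f[-1] = u_new[-1] - u_old[-1]
          return f

      # The Jacobian is never formed; matrix-vector products are approximated by
      # finite differences of the residual inside the Krylov (GMRES) solver.
      u_new = newton_krylov(residual, u_old.copy(), method="gmres")
      print(float(u_new.max()))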

  2. Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.

    PubMed

    Korth, Martin

    2013-10-14

    The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C6R^(-6) terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
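
    As a schematic illustration of the pair-wise C6R^(-6) form with a damping (cutoff) function mentioned above, the toy routine below evaluates E_disp = -s6 · Σ_{i<j} f_damp(R_ij) · C6_ij / R_ij^6 for a small set of atoms. The combination rule, Fermi-type damping, and all numerical parameters are illustrative placeholders rather than the coefficients of any published correction.

      # Schematic pair-wise dispersion energy of the C6*R^-6 type with a damping
      # ("cutoff") function.  All parameters are illustrative placeholders, not
      # the coefficients of any published dispersion correction.
      import numpy as np

      def pairwise_dispersion(coords, c6, r_vdw, s6=1.0, d=20.0):
          """E_disp = -s6 * sum_{i<j} f_damp(R_ij) * C6_ij / R_ij**6."""
          energy = 0.0
          for i in range(len(coords)):
              for j in range(i + 1, len(coords)):
                  r = np.linalg.norm(coords[i] - coords[j])
                  c6_ij = np.sqrt(c6[i] * c6[j])                 # assumed combination rule
                  r0 = r_vdw[i] + r_vdw[j]                       # sum of van der Waals radii
                  f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))   # Fermi-type cutoff
                  energy -= s6 * f_damp * c6_ij / r ** 6
          return energy

      # toy three-atom example with made-up coefficients (arbitrary units)
      coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0], [0.0, 3.0, 3.0]])
      print(pairwise_dispersion(coords, c6=[10.0, 10.0, 10.0], r_vdw=[1.5, 1.5, 1.5]))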

  3. A Route to Well-being: Intelligence vs. Wise Reasoning

    PubMed Central

    Grossmann, Igor; Na, Jinkyung; Varnum, Michael E.W.; Kitayama, Shinobu; Nisbett, Richard E.

    2012-01-01

    Laypeople and many social scientists assume that superior reasoning abilities lead to greater well-being. However, previous research has been inconclusive. This may be because prior investigators used operationalizations of reasoning that favored analytic as opposed to wise thinking. We assessed wisdom in terms of the degree to which people use various pragmatic schemas to deal with social conflicts. With a random sample of Americans we found that wise reasoning is associated with greater life satisfaction, less negative affect, better social relationships, less depressive rumination, more positive vs. negative words used in speech, and greater longevity. The relationship between wise reasoning and well-being held even when controlling for socio-economic factors, verbal abilities, and several personality traits. As in prior work there was no association between intelligence and well-being. Further, wise reasoning mediated age-related differences in well-being, particularly among the middle-aged and older adults. Implications for research on reasoning, well-being and aging are discussed. PMID:22866683

  4. Pair-Wise Trajectory Management-Oceanic (PTM-O) Concept of Operations—Version 3.9

    NASA Technical Reports Server (NTRS)

    Jones, Kenneth M.

    2014-01-01

    This document describes the Pair-wise Trajectory Management-Oceanic (PTM-O) Concept of Operations (ConOps). Pair-wise Trajectory Management (PTM) is a concept that includes airborne and ground-based capabilities designed to enable, and to benefit from, an airborne pair-wise distance-monitoring capability. PTM includes the capabilities needed for the controller to issue a PTM clearance that resolves a conflict for a specific pair of aircraft. PTM avionics include the capabilities needed for the flight crew to manage their trajectory relative to specific designated aircraft. Pair-wise Trajectory Management-Oceanic (PTM-O) is a region-specific application of the PTM concept. PTM is sponsored by the National Aeronautics and Space Administration (NASA) Concept and Technology Development Project (part of NASA's Airspace Systems Program). The goal of PTM is to use enhanced and distributed communications and surveillance along with airborne tools to permit reduced separation standards for given aircraft pairs, thereby increasing the capacity and efficiency of aircraft operations at a given altitude or volume of airspace.

  5. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Due to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method.

  6. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    PubMed

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  7. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System.

    PubMed

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-05-04

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement for a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation-modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance with the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable to modulation angular rate error compensation under various dynamic conditions.

  8. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  9. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  10. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  11. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  12. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  13. GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling

    DTIC Science & Technology

    2015-10-18

    GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling. Jeremy Murray Krezan (1), Samantha Howard (1), Phan D. Dao (1), Derek...Surka (2); (1) AFRL Space Vehicles Directorate, (2) Applied Technology Associates Incorporated. From December 2009 through 2011 the NASA Wide-Field Infrared...of known debris. The NASA-WISE GEO belt debris population adds potentially thousands of previously uncataloged objects. This paper describes

  14. The Level of Test-Wiseness for the Students of Arts and Science Faculty at Sharourah and Its Relationship with Some Variables

    ERIC Educational Resources Information Center

    Otoum, Abedalqader; Khalaf, Hisham Bani; Bajbeer, Abedalqader; Hamad, Hassan Bani

    2015-01-01

    This study aimed to identify the level of use of Test-wiseness strategies among the students of the Arts and Sciences Faculty at Sharourah and its relationship with some variables. A questionnaire was designed consisting of (29) items measuring three domains of Test-wiseness strategies. It was applied to a sample of (299) students.…

  15. Simulating (log(c) n)-wise Independence in NC

    DTIC Science & Technology

    1989-05-01

    independent) distribution. However, X_k(A) = (Σ_{i∈A} x(i))^k = Σ_{i_1∈A} ··· Σ_{i_k∈A} x(i_1)···x(i_k). So Lemma 2.4 applies to show that any k-wise independent... So henceforth we want an X such that F(X) ≥ E[F(X)]. 2.5 Generating k-Wise Independent Variables. It still remains to demonstrate a k

  16. The First Brown Dwarf Discovered by the Backyard Worlds: Planet 9 Citizen Science Project

    NASA Astrophysics Data System (ADS)

    Kuchner, Marc J.; Faherty, Jacqueline K.; Schneider, Adam C.; Meisner, Aaron M.; Filippazzo, Joseph C.; Gagné, Jonathan; Trouille, Laura; Silverberg, Steven M.; Castro, Rosa; Fletcher, Bob; Mokaev, Khasan; Stajic, Tamara

    2017-06-01

    The Wide-field Infrared Survey Explorer (WISE) is a powerful tool for finding nearby brown dwarfs and searching for new planets in the outer solar system, especially with the incorporation of NEOWISE and NEOWISE-Reactivation data. However, so far, searches for brown dwarfs in WISE data have yet to take advantage of the full depth of the WISE images. To efficiently search this unexplored space via visual inspection, we have launched a new citizen science project, called “Backyard Worlds: Planet 9,” which asks volunteers to examine short animations composed of difference images constructed from time-resolved WISE coadds. We report the first new substellar object discovered by this project, WISEA J110125.95+540052.8, a T5.5 brown dwarf located approximately 34 pc from the Sun with a total proper motion of ~0.7″ yr⁻¹. WISEA J110125.95+540052.8 has a WISE W2 magnitude of W2 = 15.37 ± 0.09; our sensitivity to this source demonstrates the ability of citizen scientists to identify moving objects via visual inspection that are 0.9 mag fainter than the W2 single-exposure sensitivity, a threshold that has limited prior motion-based brown dwarf searches with WISE.

  17. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  18. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  19. Do Choosing Wisely tools meet criteria for patient decision aids? A descriptive analysis of patient materials

    PubMed Central

    Légaré, France; Hébert, Jessica; Goh, Larissa; Lewis, Krystina B; Leiva Portocarrero, Maria Ester; Robitaille, Hubert; Stacey, Dawn

    2016-01-01

    Objectives: Choosing Wisely is a remarkable physician-led campaign to reduce unnecessary or harmful health services. Some of the literature identifies Choosing Wisely as a shared decision-making approach. We evaluated the patient materials developed by Choosing Wisely Canada to determine whether they meet the criteria for shared decision-making tools known as patient decision aids. Design: Descriptive analysis of all Choosing Wisely Canada patient materials. Data source: In May 2015, we selected all Choosing Wisely Canada patient materials from its official website. Main outcomes and measures: Four team members independently extracted characteristics of the English materials using the International Patient Decision Aid Standards (IPDAS) modified 16-item minimum criteria for qualifying and certifying patient decision aids. The research team discussed discrepancies between data extractors and reached a consensus. Descriptive analysis was conducted. Results: Of the 24 patient materials assessed, 12 were about treatments, 11 were about screening and 1 was about prevention. The median score for patient materials using IPDAS criteria was 10/16 (range: 8–11) for screening topics and 6/12 (range: 6–9) for prevention and treatment topics. Commonly missed criteria were stating the decision (21/24 did not), providing balanced information on option benefits/harms (24/24 did not), citing evidence (24/24 did not) and updating policy (24/24 did not). Out of 24 patient materials, only 2 met the 6 IPDAS criteria to qualify as patient decision aids, and neither of these 2 met the 6 certifying criteria. Conclusions: Patient materials developed by Choosing Wisely Canada do not meet the IPDAS minimal qualifying or certifying criteria for patient decision aids. Modifications to the Choosing Wisely Canada patient materials would help to ensure that they qualify as patient decision aids and thus as more effective shared decision-making tools. PMID:27566638

  20. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
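
    As a generic illustration of the comparison described above (analytic error propagation versus Monte Carlo over simulated subject panels), the sketch below uses a simplified stand-in rating, mean attenuation minus two standard deviations, rather than the actual NRR formula; the attenuation statistics and panel size are assumed values.

      # Generic illustration only: propagate subject-to-subject variability into a
      # rating-style statistic analytically and by Monte Carlo.  The stand-in
      # "rating" (mean - 2*SD of attenuation) and all numbers are assumptions,
      # not the actual Noise Reduction Rating computation.
      import numpy as np

      mu, sigma, n_subjects = 30.0, 5.0, 10        # assumed attenuation stats (dB)
      rng = np.random.default_rng(0)

      ratings = []
      for _ in range(20000):                       # Monte Carlo over subject panels
          a = rng.normal(mu, sigma, n_subjects)
          ratings.append(a.mean() - 2.0 * a.std(ddof=1))
      print("Monte Carlo SD of rating:", np.std(ratings))

      # First-order propagation of errors: Var(mean) = sigma^2/n and, for normal
      # data, Var(SD) ~ sigma^2 / (2*(n-1)); the rating variance adds them as
      # Var(mean) + 4*Var(SD).
      analytic_sd = np.sqrt(sigma**2 / n_subjects + 4.0 * sigma**2 / (2.0 * (n_subjects - 1)))
      print("Propagated (analytic) SD:", analytic_sd)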

  1. Selection of Hidden Layer Neurons and Best Training Method for FFNN in Application of Long Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj

    2012-05-01

    For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, plant structure planning, etc. A new technique for long term load forecasting (LTLF) using an optimized feed forward artificial neural network (FFNN) architecture is presented in this paper, which selects the optimal number of neurons in the hidden layer as well as the best training method for the case study. The prediction performance of the proposed technique is evaluated using the mean absolute percentage error (MAPE) between Thailand private electricity consumption and the forecasted data. The results obtained are compared with the results of classical auto-regressive (AR) and moving average (MA) methods. It is, in general, observed that the proposed method is more accurate in prediction.
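
    A minimal sketch of the evaluation metric named above, mean absolute percentage error (MAPE), computed between actual and forecast series; the numbers are illustrative placeholders, not the Thailand consumption data.

      # Minimal MAPE sketch; the series below are illustrative placeholders.
      import numpy as np

      def mape(actual, forecast):
          """Mean absolute percentage error, in percent."""
          actual = np.asarray(actual, dtype=float)
          forecast = np.asarray(forecast, dtype=float)
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      actual = [120.0, 125.0, 131.0, 138.0]      # e.g. yearly consumption (arbitrary units)
      forecast = [118.0, 127.0, 129.0, 141.0]    # model output for the same years
      print(f"MAPE = {mape(actual, forecast):.2f}%")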

  2. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as the proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with a simulation study and real data analysis.
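
    The sketch below gives a generic empirical version of a cost-weighted pairwise ranking error of the kind combined in the theoretical risk above; the cost matrix (proportional to category gap), the scores, and the tie handling are illustrative choices of mine, not the paper's formulation.

      # Generic cost-weighted pairwise ranking error for ordinal labels.  The cost
      # matrix, scores, and tie handling are illustrative choices only.
      import numpy as np

      def pairwise_ranking_error(scores, labels, cost):
          """Cost-weighted fraction of ordered pairs ranked incorrectly (ties count half)."""
          total, weight = 0.0, 0.0
          for i in range(len(labels)):
              for j in range(len(labels)):
                  if labels[i] < labels[j]:              # pair that should be ranked i below j
                      c = cost[labels[i], labels[j]]
                      if scores[i] > scores[j]:
                          miss = 1.0                     # misranked pair
                      elif scores[i] == scores[j]:
                          miss = 0.5                     # tie counts half
                      else:
                          miss = 0.0
                      total += c * miss
                      weight += c
          return total / weight

      labels = np.array([0, 0, 1, 1, 2, 2])              # three ordinal categories
      scores = np.array([0.1, 0.4, 0.3, 0.8, 0.7, 1.2])  # some scoring function
      cost = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))   # cost grows with category gap
      print(pairwise_ranking_error(scores, labels, cost))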

  3. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  4. Land cover dynamics across the Great Plains and their influence on breeding birds: Potential artefact of data and analysis limitations

    Treesearch

    C. H. Flather; M. S. Knowles; L. S. Baggett

    2017-01-01

    The distribution and abundance of obligate grassland breeding birds in the US have declined across the Great Plains as native habitats have been converted to intensive human land use. A major finding of Scholtz et al. (2017: Table 3) was that the group-wise extinction rate among 13 common grassland nesting birds declined with increasing cropland. This conclusion runs...

  5. Comparisons of GLM and LMA Observations

    NASA Astrophysics Data System (ADS)

    Thomas, R. J.; Krehbiel, P. R.; Rison, W.; Stanley, M. A.; Attanasio, A.

    2017-12-01

    Observations from 3-dimensional VHF lightning mapping arrays (LMAs) provide a valuable basis for evaluating the spatial accuracy and detection efficiencies of observations from the recently launched, optical-based Geosynchronous Lightning Mapper (GLM). In this presentation, we describe results of comparing the LMA and GLM observations. First, the observations are compared spatially and temporally at the individual event (pixel) level for sets of individual discharges. For LMA networks in Florida, Colorado, and Oklahoma, the GLM observations are well correlated time-wise with LMA observations but are systematically offset by one to two pixels (~10 to 15 or 20 km) in a southwesterly direction from the actual lightning activity. The graphical comparisons show a similar location uncertainty depending on the altitude at which the scattered light is emitted from the parent cloud, due to being observed at slant ranges. Detection efficiencies (DEs) can be accurately determined graphically for intervals where individual flashes in a storm are resolved time-wise, and DEs and false alarm rates can be automated using flash sorting algorithms for overall and/or larger storms. This can be done as a function of flash size and duration, and generally shows high detection rates for larger flashes. Preliminary results during the May 1, 2017 ER-2 overflight of Colorado storms indicate decreased detection efficiency if the storm is obscured by an overlying cloud layer.

  6. Deriving Stellar Masses for the ALFALFA α.100 Sample

    NASA Astrophysics Data System (ADS)

    Hess, Logan; Cornell 2017 Summer REU

    2018-01-01

    For this project, we explore different methods of deriving the stellar masses of galaxies in the ALFALFA (Arecibo Legacy Fast ALFA) α.100 survey. In particular, we measure the effectiveness of SED (Spectral Energy Distribution) fitting on the sample. SED fitting was performed by MAGPHYS (Multi-wavelength Analysis of Galaxy Physical Properties), utilizing a wide range of photometry in the UV, optical, and IR bands. Photometry was taken from GALEX GR6/7 (UV), SDSS DR13 (optical), WISE All-Sky (near-IR), and Herschel PACS/SPIRE (far-IR). The efficiency of SED fitting increases with a broader range of photometry; however, detection rates varied significantly across the different bands. Using a more “comprehensive” sample of galaxies, the GSWLC-A (GALEX, SDSS, WISE Legacy Catalog All-Sky Survey), we aimed to measure which combination of bands provided the largest sample return with the lowest amount of uncertainty, which could then be used to estimate the masses of the galaxies in the α.100 sample.

  7. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit at the cost of reduced duty cycle. The error reduction allows the clinical target volume to planning target volume (CTV-PTV) margin to be reduced, leading to decreased normal-tissue toxicity and possible dose escalation. The CTV-PTV margin is also evaluated to quantify clinical benefits of EKF-GPRN+ prediction.
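
    As a highly simplified, per-coordinate illustration of the filtering component described above, the sketch below runs a linear constant-velocity Kalman filter on a synthetic breathing trace and reports the one-step-ahead prediction error. The actual EKF-GPRN+ algorithm uses an extended Kalman filter per axis plus a GPRN correction in 3D; the model, noise levels, and trace here are assumptions.

      # Simplified per-coordinate sketch of Kalman-style respiratory-motion
      # prediction (constant-velocity model, made-up noise levels).  The published
      # EKF-GPRN+ method adds an extended Kalman filter per axis and a GPRN
      # correction in 3D; this only illustrates the one-step-ahead filtering idea.
      import numpy as np

      dt = 0.192                                   # 192 ms lookahead = one step ahead
      F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity state transition
      H = np.array([[1.0, 0.0]])                   # only position is measured
      Q = np.diag([1e-4, 1e-3])                    # assumed process noise
      R = np.array([[1e-2]])                       # assumed measurement noise

      x, P = np.zeros((2, 1)), np.eye(2)
      t = np.arange(0.0, 60.0, dt)
      trace = np.sin(2.0 * np.pi * t / 4.0)        # synthetic breathing trace (~4 s cycle)

      preds = []
      for z in trace:
          x, P = F @ x, F @ P @ F.T + Q                          # predict
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
          x = x + K @ (np.array([[z]]) - H @ x)                  # update with measurement
          P = (np.eye(2) - K @ H) @ P
          preds.append(float((F @ x)[0, 0]))                     # position predicted at t + dt

      rmse = np.sqrt(np.mean((np.array(preds[:-1]) - trace[1:]) ** 2))
      print(f"one-step-ahead RMS prediction error: {rmse:.3f}")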

  8. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
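
    A sketch of the kind of binomial calculation that underlies the validity figures above, under two assumptions of mine: med-pass observations are independent Bernoulli trials, and a citation is triggered when the observed error proportion exceeds 5%. It tabulates the probability of triggering a citation for a few true error rates and observation counts.

      # Sketch of the binomial reasoning behind the validity discussion above.
      # Assumptions (mine): independent observations, and a citation whenever the
      # observed med-pass error proportion exceeds 5%.
      from scipy.stats import binom

      def prob_citation(true_rate, n_obs, threshold=0.05):
          """P(observed error proportion > threshold) given the true error rate."""
          k_cutoff = int(threshold * n_obs)          # citation if errors exceed this count
          return 1.0 - binom.cdf(k_cutoff, n_obs, true_rate)

      for true_rate in (0.05, 0.10):
          for n_obs in (50, 200, 2000):
              p = prob_citation(true_rate, n_obs)
              print(f"true rate {true_rate:.0%}, n = {n_obs:5d}: P(citation) = {p:.3f}")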

  9. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  10. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  11. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, occurring along the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples and their rates in certain test groups in our laboratory. This preliminary study was based on the samples rejected over a one-year period, considering the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient volume of specimen and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group (2.28%) was significantly higher than that of the other test groups (P < 0.001), with an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient specimen volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or otherwise inappropriate test requests, improperly labelled samples for inpatients, and blood-drawing errors, especially insufficient specimen volume in the coagulation test group.

  12. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  13. Screening with Papanicolaou tests in Alberta

    PubMed Central

    Symonds, Christopher J.; Chen, Wenxin; Rose, Marianne Sarah; Cooke, Lara J.

    2018-01-01

    Objective: To describe the prevalence and geographic distribution of cervical cancer screening, as well as the age groups of those undergoing screening, in Alberta, and to determine if screening practices conform to current guidelines and follow Choosing Wisely Canada recommendations. Design: Descriptive study using data from the Alberta Ministry of Health Analytics and Performance Reporting Branch. Setting: Alberta. Participants: Women who had 1 or more Papanicolaou tests between 2011 and 2013. Main outcome measures: Number of women aged 15 to 20 and those aged 70 and older who had 1 or more Pap tests in a 3-year period; year-to-year trends in screening rates for women in these 2 age groups; trends in screening rates in various geographic regions (ie, cities and zones) in Alberta; and the discipline of clinicians who ordered the Pap tests. Results: Between 2011 and 2013, 805 632 women in the province of Alberta had 1 or more Pap tests for cervical cancer screening. Overall, 25 511 (17.5%) women aged 15 to 20 and 16 818 (10.3%) aged 70 and older were screened contrary to most existing guidelines. Screening rates varied markedly in different geographic regions of the province. Most Pap tests were ordered by family physicians or general practitioners. Conclusion: Within the geographic regions of Alberta, provincial, national, and international guidelines for screening with Pap tests are inconsistently followed. This strongly echoes the need for clinicians and patients to consider the Choosing Wisely Canada recommendations and current guidelines for cervical cancer screening. PMID:29358254

  14. GISentinel: a software platform for automatic ulcer detection on capsule endoscopy videos

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Jiao, Heng; Meng, Fan; Leighton, Jonathon A.; Shabana, Pasha; Rentz, Lauri

    2014-03-01

    In this paper, we present a novel and clinically valuable software platform for automatic ulcer detection on the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos take about 8 hours. They have to be reviewed manually by physicians to detect and locate diseases such as ulcers and bleeding. The process is time consuming. Moreover, the lengthy manual review makes missed findings likely. Working with our collaborators, we focused on developing a software platform called GISentinel, which can fully automate GI tract ulcer detection and classification. This software includes 3 parts: frequency-based Log-Gabor filter regions of interest (ROI) extraction, a unique feature selection and validation method (e.g., illumination-invariant features, color-independent features, and symmetrical texture features), and cascade SVM classification for handling "ulcer vs. non-ulcer" cases. In our experiments, this SW gave decent results. Frame-wise, the ulcer detection rate is 69.65% (319/458). Instance-wise, the ulcer detection rate is 82.35% (28/34). The false alarm rate is 16.43% (34/207). This work is part of our innovative 2D/3D based GI tract disease detection software platform. The final goal of this SW is the intelligent detection and classification of major GI tract diseases, such as bleeding, ulcers, and polyps, from CE videos. This paper mainly describes the automatic ulcer detection functional module.
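
    The detection and false-alarm figures quoted above follow directly from the reported counts, as the quick check below shows.

      # Quick check of the frame-wise and instance-wise rates quoted above.
      frame_rate = 319 / 458        # ulcer frames correctly flagged
      instance_rate = 28 / 34       # ulcer instances detected
      false_alarm_rate = 34 / 207   # non-ulcer cases flagged as ulcer
      print(f"{frame_rate:.2%}, {instance_rate:.2%}, {false_alarm_rate:.2%}")
      # -> 69.65%, 82.35%, 16.43%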

  15. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  16. Investigating a population of infrared-bright gamma-ray burst host galaxies

    NASA Astrophysics Data System (ADS)

    Chrimes, Ashley A.; Stanway, Elizabeth R.; Levan, Andrew J.; Davies, Luke J. M.; Angus, Charlotte R.; Greis, Stephanie M. L.

    2018-07-01

    We identify and explore the properties of an infrared-bright gamma-ray burst (GRB) host population. Candidate hosts are selected by coincidence with sources in WISE, with matching to random coordinates and a false alarm probability analysis showing that the contamination fraction is ˜0.5. This methodology has already identified the host galaxy of GRB 080517. We combine survey photometry from Pan-STARRS, SDSS, APASS, 2MASS, GALEX, and WISE with our own WHT/ACAM and VLT/X-shooter observations to classify the candidates and identify interlopers. Galaxy SED fitting is performed using MAGPHYS, in addition to stellar template fitting, yielding 13 possible IR-bright hosts. A further seven candidates are identified from the previously published work. We report a candidate host for GRB 061002, previously unidentified as such. The remainder of the galaxies have already been noted as potential hosts. Comparing the IR-bright population properties including redshift z, stellar mass M⋆, star formation rate SFR, and V-band attenuation AV to GRB host catalogues in the literature, we find that the infrared-bright population is biased towards low z, high M⋆, and high AV. This naturally arises from their initial selection - local and dusty galaxies are more likely to have the required IR flux to be detected in WISE. We conclude that while IR-bright GRB hosts are not a physically distinct class, they are useful for constraining existing GRB host populations, particularly for long GRBs.

  17. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels and processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I–k_4) as well as macro parameters, such as volume of distribution (V_d) and binding potential (BP_I & BP_II) and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicated that it is desirable for pre-segmentation with traditional FCM clustering to generate voxel-wise parametric images with GLLS from dynamic SPECT data.
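
    A generic fuzzy C-means sketch is given below to illustrate the pre-clustering step described above, run on synthetic 1D "kinetic curves"; the implementation, data, fuzzifier m = 2, and cluster count are illustrative assumptions rather than the authors' SPECT pipeline, which further feeds the cluster information into GLLS.

      # Generic fuzzy C-means (FCM) sketch on synthetic kinetic curves, illustrating
      # the pre-segmentation idea described above.  Data, fuzzifier and cluster
      # count are illustrative assumptions, not the authors' SPECT pipeline.
      import numpy as np

      def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per voxel
          for _ in range(n_iter):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]          # fuzzy-weighted centroids
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
              U = d ** (-2.0 / (m - 1.0))
              U /= U.sum(axis=1, keepdims=True)          # standard FCM membership update
          return centers, U

      # three groups of noisy synthetic time-activity curves, 10 time points each
      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 1.0, 10)
      X = np.vstack([k * t + 0.05 * rng.standard_normal((50, t.size)) for k in (0.5, 1.0, 2.0)])
      centers, U = fuzzy_c_means(X, c=3)
      print(U.argmax(axis=1)[:10])                       # hard labels from fuzzy memberships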

  18. Investigating a population of infrared-bright gamma-ray burst host galaxies

    NASA Astrophysics Data System (ADS)

    Chrimes, Ashley A.; Stanway, Elizabeth R.; Levan, Andrew J.; Davies, Luke J. M.; Angus, Charlotte R.; Greis, Stephanie M. L.

    2018-04-01

    We identify and explore the properties of an infrared-bright gamma-ray burst (GRB) host population. Candidate hosts are selected by coincidence with sources in WISE, with matching to random coordinates and a false alarm probability analysis showing that the contamination fraction is ˜ 0.5. This methodology has already identified the host galaxy of GRB 080517. We combine survey photometry from Pan-STARRS, SDSS, APASS, 2MASS, GALEX and WISE with our own WHT/ACAM and VLT/X-shooter observations to classify the candidates and identify interlopers. Galaxy SED fitting is performed using MAGPHYS, in addition to stellar template fitting, yielding 13 possible IR-bright hosts. A further 7 candidates are identified from previously published work. We report a candidate host for GRB 061002, previously unidentified as such. The remainder of the galaxies have already been noted as potential hosts. Comparing the IR-bright population properties including redshift z, stellar mass M⋆, star formation rate SFR and V-band attenuation AV to GRB host catalogues in the literature, we find that the infrared-bright population is biased toward low z, high M⋆ and high AV. This naturally arises from their initial selection - local and dusty galaxies are more likely to have the required IR flux to be detected in WISE. We conclude that while IR-bright GRB hosts are not a physically distinct class, they are useful for constraining existing GRB host populations, particularly for long GRBs.

  19. Discovery of a Lensed Ultrabright Submillimeter Galaxy at z = 2.0439

    NASA Astrophysics Data System (ADS)

    Díaz-Sánchez, A.; Iglesias-Groth, S.; Rebolo, R.; Dannerbauer, H.

    2017-07-01

    We report an ultrabright lensed submillimeter galaxy (SMG) at z = 2.0439, WISE J132934.18+224327.3, identified as a result of a full-sky cross-correlation of the AllWISE and Planck compact source catalogs aimed at searching for bright analogs of the SMG SMM J2135, the Cosmic Eyelash. Inspection of archival SCUBA-2 observations of the candidates revealed a source with fluxes (S_850μm = 130 mJy) consistent with the Planck measurements. The centroid of the SCUBA-2 source coincides within 1 arcsec with the position of the AllWISE mid-IR source, and, remarkably, with an arc-shaped lensed galaxy in HST images at visible wavelengths. Low-resolution rest-frame UV-optical spectroscopy of this lensed galaxy obtained with the 10.4 m GTC reveals the typical absorption lines of a starburst galaxy. Gemini-N near-IR spectroscopy provided a clear detection of Hα emission. The lensed source appears to be gravitationally magnified by a massive foreground galaxy cluster lens at z = 0.44; modeling with Lenstool indicates a lensing amplification factor of 11 ± 2. We determine an intrinsic rest-frame 8-1000 μm luminosity, L_IR, of (1.3 ± 0.1) × 10^13 L⊙, and a likely star formation rate (SFR) of ~500-2000 M⊙ yr^-1. The SED shows a remarkable similarity with the Cosmic Eyelash from optical-mid/IR to submillimeter/radio, albeit at higher fluxes.

  20. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...

  1. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. The type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations, but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
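
    The inflation mechanism described above can be reproduced with a small simulation. The sketch below (Python, assuming numpy and statsmodels are available) tests a null SNP whose allele frequency depends on individual admixture, with a phenotype that also depends on admixture; the effect sizes, sample size and the use of a single known admixture proportion in place of genomic MDS components are illustrative assumptions, not the authors' design.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, n_sims, alpha = 2000, 500, 0.05

    def null_snp_pvalue(adjust):
        anc = rng.random(n)                    # individual admixture proportion
        freq = 0.1 + 0.4 * anc                 # ancestry-dependent allele frequency (high-FST SNP)
        g = rng.binomial(2, freq)              # genotype; no true effect on the phenotype
        y = 0.5 * anc + rng.normal(size=n)     # phenotype shifted by ancestry only
        X = np.column_stack([g, anc]) if adjust else g[:, None]
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        return fit.pvalues[1]                  # p-value of the genotype term

    for adjust in (False, True):
        p = np.array([null_snp_pvalue(adjust) for _ in range(n_sims)])
        print(f"ancestry adjustment={adjust}: empirical type-I error = {(p < alpha).mean():.3f}")
    ```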

  2. KSC-2009-6554

    NASA Image and Video Library

    2009-10-20

    VANDENBERG AIR FORCE BASE, Calif. - The logo for NASA's Wide-field Infrared Survey Explorer, or WISE, mission. WISE will survey the entire sky at infrared wavelengths, creating a cosmic clearinghouse of hundreds of millions of objects which will be catalogued and provide a vast storehouse of knowledge about the solar system, the Milky Way, and the universe. Launch is scheduled for Dec. 9. For additional information, visit http://www.nasa.gov/wise. Photo credit: NASA/Doug Kulkow, VAFB

  3. VizieR Online Data Catalog: Luminosity and redshift of galaxies from WISE/SDSS (Toba+, 2014)

    NASA Astrophysics Data System (ADS)

    Toba, Y.; Oyabu, S.; Matsuhara, H.; Malkan, M. A.; Gandhi, P.; Nakagawa, T.; Isobe, N.; Shirahata, M.; Oi, N.; Ohyama, Y.; Takita, S.; Yamauchi, C.; Yano, K.

    2017-07-01

    We selected 12 and 22 um flux-limited galaxies based on the WISE (Cat. II/311) and SDSS (Cat. II/294) catalogs, and these galaxies were then classified into five types according to their optical spectroscopic information in the SDSS catalog. For spectroscopically classified galaxies, we constructed the luminosity functions using the 1/Vmax method, considering the detection limit of the WISE and SDSS catalogs. (1 data file).

  4. Improving mass-univariate analysis of neuroimaging data by modelling important unknown covariates: Application to Epigenome-Wide Association Studies.

    PubMed

    Guillaume, Bryan; Wang, Changqing; Poh, Joann; Shen, Mo Jun; Ong, Mei Lyn; Tan, Pei Fang; Karnani, Neerja; Meaney, Michael; Qiu, Anqi

    2018-06-01

    Statistical inference on neuroimaging data is often conducted using a mass-univariate model, equivalent to fitting a linear model at every voxel with a known set of covariates. Due to the large number of linear models, it is challenging to check if the selection of covariates is appropriate and to modify this selection adequately. The use of standard diagnostics, such as residual plotting, is clearly not practical for neuroimaging data. However, the selection of covariates is crucial for linear regression to ensure valid statistical inference. In particular, the mean model of regression needs to be reasonably well specified. Unfortunately, this issue is often overlooked in the field of neuroimaging. This study aims to adopt the existing Confounder Adjusted Testing and Estimation (CATE) approach and to extend it for use with neuroimaging data. We propose a modification of CATE that can yield valid statistical inferences using Principal Component Analysis (PCA) estimators instead of Maximum Likelihood (ML) estimators. We then propose a non-parametric hypothesis testing procedure that can improve upon parametric testing. Monte Carlo simulations show that the modification of CATE allows for more accurate modelling of neuroimaging data and can in turn yield a better control of False Positive Rate (FPR) and Family-Wise Error Rate (FWER). We demonstrate its application to an Epigenome-Wide Association Study (EWAS) on neonatal brain imaging and umbilical cord DNA methylation data obtained as part of a longitudinal cohort study. Software for this CATE study is freely available at http://www.bioeng.nus.edu.sg/cfa/Imaging_Genetics2.html. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
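
    The non-parametric testing idea can be illustrated with a generic single-step max-statistic permutation (sign-flipping) procedure, which controls the family-wise error rate across voxels for a one-sample design. This is only a sketch of that general strategy, not the CATE-based procedure of the paper; the design, data dimensions and number of permutations are assumptions.

    ```python
    import numpy as np

    def max_t_fwer(data, n_perm=1000, seed=0):
        """Sign-flipping max-|t| permutation test for a one-sample design.
        data: (n_subjects, n_voxels) effect estimates, mean zero under H0."""
        rng = np.random.default_rng(seed)
        n = data.shape[0]
        tstat = lambda x: x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))
        t_obs = np.abs(tstat(data))
        max_null = np.empty(n_perm)
        for b in range(n_perm):
            signs = rng.choice([-1.0, 1.0], size=(n, 1))
            max_null[b] = np.abs(tstat(data * signs)).max()
        # Single-step adjusted p-value: how often the permutation maximum beats each voxel.
        p_adj = (1 + (max_null[None, :] >= t_obs[:, None]).sum(1)) / (n_perm + 1)
        return p_adj

    # Toy usage: 20 subjects, 1000 voxels, the first 10 carry a real effect.
    rng = np.random.default_rng(1)
    d = rng.normal(size=(20, 1000))
    d[:, :10] += 1.0
    sig = max_t_fwer(d) < 0.05
    print(sig[:10].sum(), "true voxels detected;", sig[10:].sum(), "false positives")
    ```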

  5. Retrievals of Surface Air Temperature Using Multiple Satellite Data Combinations over Complex Terrain in the Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Jang, K.; Won, M.; Yoon, S.; Lim, J.

    2016-12-01

    Surface air temperature (Tair) is a fundamental factor for terrestrial environments and plays a major role in applied meteorology, climatology, and ecology. Satellite remote sensing offers the opportunity to estimate Tair over the Earth's surface with high spatial and temporal resolution. The Moderate Resolution Imaging Spectroradiometer (MODIS) provides effective Tair retrievals, although these are restricted to clear-sky conditions. MODIS Tair over complex terrain can suffer significant retrieval errors due to the mismatch between the retrieval height and the elevation of local weather stations. In this study, we propose a methodology to estimate Tair over complex terrain under all sky conditions using multiple-satellite data fusion based on a pixel-wise regression method. Synergistic information from MODIS Tair and brightness temperature (Tb) retrievals at 37 GHz from a satellite microwave sensor was combined for the analysis. The air temperature lapse rate was applied to estimate near-surface Tair over complex terrain such as mountainous regions. The retrievals produced in this study showed good agreement (RMSE < 2.5 K) with weather measurements from the Korea Forest Service (KFS) for mountain regions and from the Korea Meteorological Administration (KMA). Gaps in the MODIS Tair data due to cloud contamination were successfully filled using the proposed method, which yielded accuracy similar to that of the clear-sky retrievals. The results indicate that satellite data fusion can continuously produce Tair retrievals with reasonable accuracy and that applying the temperature lapse rate can improve reliability over complex terrain such as the Korean Peninsula.
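
    A bare-bones version of the pixel-wise regression fusion might look like the sketch below: for every pixel, clear-sky MODIS Tair is regressed on the 37 GHz Tb time series, and the fitted relation fills the cloud-contaminated gaps. The linear form, the minimum-sample threshold and the synthetic fields are assumptions, and the lapse-rate (elevation) correction used in the study is omitted.

    ```python
    import numpy as np

    def pixelwise_fill(tair, tb, min_samples=3):
        """tair, tb: arrays of shape (n_times, ny, nx); tair has NaNs under cloud,
        tb is assumed gap-free. Fit Tair ~ a + b*Tb per pixel and fill the gaps."""
        filled = tair.copy()
        _, ny, nx = tair.shape
        for j in range(ny):
            for i in range(nx):
                y, x = tair[:, j, i], tb[:, j, i]
                ok = ~np.isnan(y)
                if ok.sum() < min_samples:     # not enough clear-sky samples to fit
                    continue
                b, a = np.polyfit(x[ok], y[ok], 1)
                gaps = np.isnan(y)
                filled[gaps, j, i] = a + b * x[gaps]
        return filled

    # Toy usage: synthetic 10x10 fields over 30 time steps with ~30% cloud gaps.
    rng = np.random.default_rng(0)
    tb = 250 + 30 * rng.random((30, 10, 10))
    tair = 0.8 * tb - 180 + rng.normal(0, 1, tb.shape)
    tair[rng.random(tb.shape) < 0.3] = np.nan
    print("gaps left unfilled:", np.isnan(pixelwise_fill(tair, tb)).sum())
    ```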

  6. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the trade-off they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source code for all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
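
    One generic way to attach p-values to importance scores, in the spirit of the procedures evaluated above (though not necessarily identical to any of them), is to re-estimate the importances under permutations of the outcome and then apply a Benjamini-Hochberg step-up at a chosen false discovery rate. The random forest, the simulated dataset and the thresholds below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)

    def importances(X, y, seed=0):
        rf = RandomForestClassifier(n_estimators=100, random_state=seed)
        return rf.fit(X, y).feature_importances_

    obs = importances(X, y)
    rng = np.random.default_rng(0)
    n_perm = 100
    null = np.array([importances(X, rng.permutation(y), seed=b) for b in range(n_perm)])

    # Per-feature permutation p-value: how often a label-permuted importance reaches the observed one.
    pvals = (1 + (null >= obs).sum(axis=0)) / (n_perm + 1)

    # Benjamini-Hochberg step-up at FDR q = 0.10.
    q, m = 0.10, len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    print("features kept after FDR control:", np.sort(order[:k]))
    ```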

  7. Grey matter volume increase following electroconvulsive therapy in patients with late life depression: a longitudinal MRI study

    PubMed Central

    Bouckaert, Filip; De Winter, François-Laurent; Emsell, Louise; Dols, Annemieke; Rhebergen, Didi; Wampers, Martien; Sunaert, Stefan; Stek, Max; Sienaert, Pascal; Vandenbulcke, Mathieu

    2016-01-01

    Background The evidence on the mechanisms of action of electroconvulsive therapy (ECT) has grown over the past decades. Recent studies show an ECT-related increase in hippocampal, amygdala and subgenual cortex volume. We examined grey matter volume changes following ECT using voxel-based morphometry (VBM) whole brain analysis in patients with severe late life depression (LLD). Methods Elderly patients with unipolar depression were treated twice weekly with right unilateral ECT until remission on the Montgomery–Åsberg Depression Rating Scale (MADRS) was achieved. Cognition (Mini-Mental State Examination) and psychomotor changes (CORE Assessment) were monitored at baseline and 1 week after the last session of ECT. We performed 3 T structural MRI at both time points. We used the VBM8 toolbox in SPM8 to study grey matter volume changes. Paired t tests were used to compare pre- and post-ECT grey matter volume (voxel-level family-wise error threshold p < 0.05) and to assess clinical response. Results Twenty-eight patients (mean age 71.9 ± 7.8 yr, 8 men) participated in our study. Patients received a mean of 11.2 ± 4 sessions of ECT. The remission rate was 78.6%. Cognition, psychomotor agitation and psychomotor retardation improved significantly (p < 0.001). Right-hemispheric grey matter volume was increased in the caudate nucleus, medial temporal lobe (including hippocampus and amygdala), insula and posterior superior temporal regions but did not correlate with MADRS score. Grey matter volume increase in the caudate nucleus region correlated significantly with total CORE Assessment score (r = 0.63; p < 0.001). Limitations Not all participants were medication-free. Conclusion Electroconvulsive therapy in patients with LLD is associated with significant grey matter volume increase, which is most pronounced ipsilateral to the stimulation side. PMID:26395813

  8. Improving the quality of cognitive screening assessments: ACEmobile, an iPad-based version of the Addenbrooke's Cognitive Examination-III.

    PubMed

    Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F

    2018-01-01

    Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Errors are ubiquitous in routine clinical use of cognitive screening tests, including the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.

  9. Documentation of study medication dispensing in a prospective large randomized clinical trial: experiences from the ARISTOTLE Trial.

    PubMed

    Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B

    2013-09-01

    In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.

  10. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
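
    A plain Monte Carlo version of the propagation idea, stripped of the Bayesian rating-curve estimation, is sketched below: systematic stage errors are drawn once per realisation while non-systematic errors are drawn per reading, and both are pushed through an assumed power-law rating curve. All numerical values (curve parameters, error standard deviations) are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed rating curve Q = a * (h - b)^c and its parametric uncertainty.
    a, b, c = 30.0, 0.20, 1.8
    h_obs = np.array([0.8, 1.2, 2.0, 3.5])                # observed stages (m)

    n_mc = 10_000
    a_s = rng.normal(a, 1.5, n_mc)                        # parametric curve uncertainty
    c_s = rng.normal(c, 0.05, n_mc)
    sys_err = rng.normal(0.0, 0.010, (n_mc, 1))           # systematic stage error (gauge calibration)
    rand_err = rng.normal(0.0, 0.005, (n_mc, h_obs.size)) # non-systematic error per reading
    h_s = h_obs[None, :] + sys_err + rand_err

    Q = a_s[:, None] * np.clip(h_s - b, 0.0, None) ** c_s[:, None]
    q05, q50, q95 = np.percentile(Q, [5, 50, 95], axis=0)
    for h, lo, med, hi in zip(h_obs, q05, q50, q95):
        print(f"h = {h:.1f} m -> Q = {med:7.1f} m3/s, 90% interval [{lo:.1f}, {hi:.1f}]")
    ```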

  11. Quantifying measurement uncertainties in ADCP measurements in non-steady, inhomogeneous flow

    NASA Astrophysics Data System (ADS)

    Schäfer, Stefan

    2017-04-01

    The author presents a laboratory study of fixed-platform four-beam ADCP and three-beam ADV measurements in the tailrace of a micro hydropower setup with a 35 kW Kaplan turbine and 2.5 m head. The datasets discussed quantify measurement uncertainties of the ADCP technique arising from non-steady, inhomogeneous flow. For a constant discharge of 1.5 m³/s, two flow scenarios were investigated: the regular tailrace flow downstream of the draft tube, and a straightened, less inhomogeneous flow generated by a flow-straightening device, namely a rack of 40 mm diameter pipe sections mounted directly behind the draft tube. ADCP measurements (sampling rate 1.35 Hz) were conducted at three distances behind the draft tube and compared bin-wise to measurements from three simultaneously measuring ADV probes (sampling rate 64 Hz). The ADV probes were aligned horizontally, and the ADV bins were placed in the centers of two facing ADCP bins and in the vertical under the ADCP probe at the corresponding depth. Rotating the ADV probes by 90° allowed measurements of the other two facing ADCP bins. To avoid mutual probe interference, ADCP and ADV measurements were not conducted at the same time. The datasets were evaluated using mean and fluctuating velocities. Turbulence parameters were calculated and compared as far as applicable. Uncertainties due to non-steady flow were estimated with the normalized mean square error and evaluated by comparing long-term measurements of 60 minutes to shorter measurement intervals. Uncertainties due to inhomogeneous flow were evaluated by comparing ADCP with ADV data along the ADCP beams, where ADCP data were effectively measured, and in the vertical under the ADCP probe, where the ADCP velocities are reported. Errors due to non-steady flow could be compensated through sufficiently long measurement intervals with high enough sampling rates, depending on the turbulence scales of the flow. In the case of heterogeneous distributions of the vertical velocity components in the ADCP beams, the resulting errors significantly biased the mean velocities and could not be recognized from the ADCP measurements alone. For the straightened flow scenario, the results showed good agreement between ADCP and ADV mean velocities, whereas the ADCP data consistently overestimated turbulence intensities by a factor of 2. Reynolds stresses as well as turbulent kinetic energies were in good agreement, apart from one measurement with outliers of up to 30%. For the tailrace flow scenario, the mean velocities from the ADCP data underestimated the ADV data by 23%. Turbulence intensities from the ADCP data fluctuated, overestimated the ADV data by factors of up to 2.8 and showed spatial discrepancies over the depth. Reynolds stresses were only partly in good agreement, and turbulent kinetic energies were over- and underestimated in a range of [-50; +30]%.
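
    The comparison of long-term and short-interval means can be expressed compactly as in the sketch below, which computes a normalized mean square error of block means against a 60-minute reference mean. The exact normalisation and the synthetic 1.35 Hz record are assumptions and may differ from the author's definition.

    ```python
    import numpy as np

    def interval_mean_nmse(u, fs, t_ref=3600.0, intervals=(60, 120, 300, 600, 1200)):
        """Normalized mean square error of short-interval means against the
        long-term reference mean. u: velocity series (m/s), fs: sampling rate (Hz)."""
        u_ref = u[: int(t_ref * fs)].mean()
        out = {}
        for T in intervals:
            n = int(T * fs)
            blocks = u[: (u.size // n) * n].reshape(-1, n).mean(axis=1)  # non-overlapping means
            out[T] = np.mean((blocks - u_ref) ** 2) / u_ref ** 2
        return out

    # Toy usage: a synthetic 1.35 Hz record with slow fluctuations and noise.
    fs = 1.35
    t = np.arange(0.0, 3600.0, 1.0 / fs)
    u = 0.5 + 0.05 * np.sin(2 * np.pi * t / 300.0) \
        + np.random.default_rng(0).normal(0.0, 0.05, t.size)
    for T, nmse in interval_mean_nmse(u, fs).items():
        print(f"{T:5d} s interval: NMSE = {nmse:.2e}")
    ```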

  12. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    PubMed

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. The setting was an 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9, range 4-17, pharmacists collecting data per day) during routine work were prospectively recorded on 38 randomly selected days over 18 months. Outcome measures were the proportion of new medication orders in which an error was identified and predictors of the error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at the patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. The number of errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  13. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
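
    The error-rate definitions quoted in the abstract are easy to illustrate empirically: simulate the numbers of false positives Vn and true positives Sn across many repetitions and average the chosen functional g(Vn, Sn). The sketch below does only that (it is not the resampling-based empirical Bayes procedure itself), and the p-value model for the alternatives is an arbitrary assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_counts(n_sims=5000, m=100, m0=90, alpha=0.05):
        """Numbers of false (Vn) and true (Sn) positives when m hypotheses are
        tested at level alpha; m0 are true nulls, the rest have modest power."""
        p_null = rng.random((n_sims, m0))
        p_alt = 1.0 - rng.power(4, (n_sims, m - m0))   # alternative p-values skewed toward 0
        return (p_null < alpha).sum(axis=1), (p_alt < alpha).sum(axis=1)

    Vn, Sn = simulate_counts()
    fdp = np.where(Vn + Sn > 0, Vn / np.maximum(Vn + Sn, 1), 0.0)

    # gEV error rate with g(V, S) = V/(V+S): the false discovery rate.
    print("FDR = E[V/(V+S)] ~", round(fdp.mean(), 3))
    # gTP error rate with g(V, S) = V and q = 2: probability of more than 2 false positives.
    print("gFWER(2) = P(V > 2) ~", round((Vn > 2).mean(), 3))
    ```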

  14. Perception of self and significant others by alcoholics and nonalcoholics.

    PubMed

    Quereshi, M Y; Soat, D M

    1976-01-01

    Ratings of self and 15 significant others on four personality factors by 47 alcoholic and 90 nonalcoholic males were analyzed by means of step-wise regression analysis and multivariate analysis of covariance. Alcoholics rated themselves less positively on extraversion and self-assertiveness (lower mean on extraversion and higher on self-assertiveness) and also judged intimate others (father, mother, and spouse) less positively on unhappiness, extraversion, and productive persistence (higher mean on unhappiness and lower means on extraversion and productive persistence). There were no significant differences between the two groups in judging persons as a whole or in the degree of differentiation that was exhibited in rating all 16 persons including self.

  15. THE FIRST HUNDRED BROWN DWARFS DISCOVERED BY THE WIDE-FIELD INFRARED SURVEY EXPLORER (WISE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davy Kirkpatrick, J.; Gelino, Christopher R.; Griffith, Roger L.

    2011-12-01

    We present ground-based spectroscopic verification of 6 Y dwarfs (see also Cushing et al.), 89 T dwarfs, 8 L dwarfs, and 1 M dwarf identified by the Wide-field Infrared Survey Explorer (WISE). Eighty of these are cold brown dwarfs with spectral types ≥T6, six of which have been announced earlier by Mainzer et al. and Burgasser et al. We present color-color and color-type diagrams showing the locus of M, L, T, and Y dwarfs in WISE color space. Near-infrared and, in a few cases, optical spectra are presented for these discoveries. Near-infrared classifications as late as early Y are presented and objects with peculiar spectra are discussed. Using these new discoveries, we are also able to extend the optical T dwarf classification scheme from T8 to T9. After deriving an absolute WISE 4.6 μm (W2) magnitude versus spectral type relation, we estimate spectrophotometric distances to our discoveries. We also use available astrometric measurements to provide preliminary trigonometric parallaxes to four of our discoveries, which have types of L9 pec (red), T8, T9, and Y0; all of these lie within 10 pc of the Sun. The Y0 dwarf, WISE 1541-2250, is the closest at 2.8 (+1.3/-0.6) pc; if this 2.8 pc value persists after continued monitoring, WISE 1541-2250 will become the seventh closest stellar system to the Sun. Another 10 objects, with types between T6 and >Y0, have spectrophotometric distance estimates also placing them within 10 pc. The closest of these, the T6 dwarf WISE 1506+7027, is believed to fall at a distance of ~4.9 pc. WISE multi-epoch positions supplemented with positional information primarily from the Spitzer/Infrared Array Camera allow us to calculate proper motions and tangential velocities for roughly one-half of the new discoveries. This work represents the first step by WISE to complete a full-sky, volume-limited census of late-T and Y dwarfs. Using early results from this census, we present preliminary, lower limits to the space density of these objects and discuss constraints on both the functional form of the mass function and the low-mass limit of star formation.

  16. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  17. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    PubMed Central

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-01-01

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution for attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the self-compensation of inertial sensor errors. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is derived, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors, making it possible to maintain high-precision autonomous navigation with the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable to modulation angular rate error compensation under various dynamic conditions. PMID:29734707

  18. The 'wise list'- a comprehensive concept to select, communicate and achieve adherence to recommendations of essential drugs in ambulatory care in Stockholm.

    PubMed

    Gustafsson, Lars L; Wettermark, Björn; Godman, Brian; Andersén-Karlsson, Eva; Bergman, Ulf; Hasselström, Jan; Hensjö, Lars-Olof; Hjemdahl, Paul; Jägre, Ingrid; Julander, Margaretha; Ringertz, Bo; Schmidt, Daniel; Sjöberg, Susan; Sjöqvist, Folke; Stiller, Carl-Olav; Törnqvist, Elisabeth; Tryselius, Rolf; Vitols, Sigurd; von Bahr, Christer

    2011-04-01

    The aim was to present and evaluate the impact of a comprehensive strategy over 10 years to select, communicate and achieve adherence to essential drug recommendations (EDR) in ambulatory care in a metropolitan healthcare region. EDRs were issued and launched as a 'Wise List' by the regional Drug and Therapeutics Committee in Stockholm. This study presents the concept by: (i) documenting the process for selecting, communicating and monitoring the impact of the 'Wise List'; (ii) analysing the variation in the number of drug substances recommended between 2000 and 2010; (iii) assessing the attitudes to the 'Wise List' among prescribers and the public; (iv) evaluating the adherence to recommendations between 2003 and 2009. The 'Wise List' consistently contained 200 drug substances for treating common diseases. The drugs were selected based on their efficacy, safety, suitability and cost-effectiveness. The 'Wise List' was known among one-third of a surveyed sample of the public in 2002 after initial marketing campaigns. All surveyed prescribers knew about the concept and 81% found the recommendations trustworthy in 2005. Adherence to recommendations increased from 69% in 1999 to 77% in 2009. In primary care, adherence increased from 83% to 87% from 2003 to 2009. The coefficient of variation (CV%) decreased from 6.1% to 3.8% for 156 healthcare centres between these years. The acceptance of the 'Wise List' in terms of trust among physicians and among the public and increased adherence may be explained by clear criteria for drug recommendations, a comprehensive communication strategy, electronic access to recommendations, continuous medical education and involvement of professional networks and patients.

  19. A new scale for assessing wisdom based on common domains and a neurobiological model: The San Diego Wisdom Scale (SD-WISE).

    PubMed

    Thomas, Michael L; Bangen, Katherine J; Palmer, Barton W; Sirkin Martin, Averria; Avanzino, Julie A; Depp, Colin A; Glorioso, Danielle; Daly, Rebecca E; Jeste, Dilip V

    2017-09-08

    Wisdom is an ancient concept that has gained new interest among clinical researchers as a complex trait relevant to well-being and healthy aging. As the empirical data regarding wisdom have grown, several measures have been used to assess an individual's level of wisdom. However, none of these measures has been based on a construct of wisdom with neurobiological underpinnings. We sought to develop a new scale, the San Diego Wisdom Scale (SD-WISE), which builds upon recent gains in the understanding of psychological and neurobiological models of the trait. Data were collected from 524 community-dwelling adults age 25-104 years as part of a structured multi-cohort study of adult lifespan. Participants were administered the SD-WISE along with two existing measures of wisdom that have been shown to have good psychometric properties. Factor analyses confirmed the hypothesized measurement model. SD-WISE total scores were reliable, demonstrated convergent and discriminant validity, and correlated, as hypothesized, negatively with emotional distress, but positively with well-being. However, the magnitudes of these associations were small, suggesting that the SD-WISE is not just a global measure of mental state. The results support the reliability and validity of SD-WISE scores. Study limitations are discussed. The SD-WISE, with good psychometric properties, a brief administration time, and a measurement model that is consistent with commonly cited content domains of wisdom based on a putative neurobiological model, may be useful in clinical practice as well as in bio-psycho-social research, especially investigations into the neurobiology of wisdom and experimental interventions to enhance wisdom. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Full-Depth Coadds of the WISE and First-Year NEOWISE-Reactivation Images

    DOE PAGES

    Meisner, Aaron M.; Lang, Dustin; Schlegel, David J.

    2017-01-03

    The Near Earth Object Wide-field Infrared Survey Explorer (NEOWISE) Reactivation mission released data from its first full year of observations in 2015. This data set includes ~2.5 million exposures in each of W1 and W2, effectively doubling the amount of WISE imaging available at 3.4 μm and 4.6 μm relative to the AllWISE release. In this paper, we have created the first ever full-sky set of coadds combining all publicly available W1 and W2 exposures from both the AllWISE and NEOWISE-Reactivation (NEOWISER) mission phases. We employ an adaptation of the unWISE image coaddition framework, which preserves the native WISE angular resolution and is optimized for forced photometry. By incorporating two additional scans of the entire sky, we not only improve the W1/W2 depths, but also largely eliminate time-dependent artifacts such as off-axis scattered moonlight. We anticipate that our new coadds will have a broad range of applications, including target selection for upcoming spectroscopic cosmology surveys, identification of distant/massive galaxy clusters, and discovery of high-redshift quasars. In particular, our full-depth AllWISE+NEOWISER coadds will be an important input for the Dark Energy Spectroscopic Instrument selection of luminous red galaxy and quasar targets. Our full-depth W1/W2 coadds are already in use within the DECam Legacy Survey (DECaLS) and Mayall z-band Legacy Survey (MzLS) reduction pipelines. Finally, much more work still remains in order to fully leverage NEOWISER imaging for astrophysical applications beyond the solar system.

  1. On the nature and correction of the spurious S-wise spiral galaxy winding bias in Galaxy Zoo 1

    NASA Astrophysics Data System (ADS)

    Hayes, Wayne B.; Davis, Darren; Silva, Pedro

    2017-04-01

    The Galaxy Zoo 1 catalogue displays a bias towards the S-wise winding direction in spiral galaxies, which has yet to be explained. The lack of an explanation confounds our attempts to verify the Cosmological Principle, and has spurred some debate as to whether a bias exists in the real Universe. The bias manifests not only in the obvious case of trying to decide if the Universe as a whole has a winding bias, but also in the more insidious case of selecting which galaxies to include in a winding direction survey. While the former bias has been accounted for in a previous image-mirroring study, the latter has not. Furthermore, the bias has never been corrected in the GZ1 catalogue, as only a small sample of the GZ1 catalogue was reexamined during the mirror study. We show that the existing bias is a human selection effect rather than a human chirality bias. In effect, the excess S-wise votes are spuriously 'stolen' from the elliptical and edge-on-disc categories, not the Z-wise category. Thus, when selecting a set of spiral galaxies by imposing a threshold T so that max(PS, PZ) > T or PS + PZ > T, we spuriously select more S-wise than Z-wise galaxies. We show that when a provably unbiased machine selects which galaxies are spirals independent of their chirality, the S-wise surplus vanishes, even if humans still determine the chirality. Thus, when viewed across the entire GZ1 sample (and by implication, the Sloan catalogue), the winding direction of arms in spiral galaxies as viewed from Earth is consistent with the flip of a fair coin.

  2. Time-Resolved Coadds and Forced Photometry of the WISE and NEOWISE-Reactivation Data

    NASA Astrophysics Data System (ADS)

    Schlegel, David

    We propose to produce full-sky, time-resolved coadds of the images collected from the NASA WISE (Wide-field Infrared Survey Explorer) satellite, including the WISE, NEOWISE, and two years of the NEOWISE-Reactivation (NEOWISE-R) mission phases. Catalogs of forced photometry over the SDSS footprint will be generated at six epochs and for the full image stack. The images and catalogs will be suitable for stellar and extragalactic studies. The WISE satellite scans the sky such that each part of the sky is visited every six months, with 10 or more exposures per visit. We propose to coadd these 10 or more exposures to produce one coadd per visit, that is, one coadd every six months. For most parts of the sky, there is one visit during the original WISE mission, one visit during NEOWISE, and then, after a 33-month gap, four more visits during the NEOWISE-R mission. These data, spanning a six-year baseline, are compelling both for studies of variability and proper motion of nearby stars, and for studies of AGN and quasars at high redshift. Furthermore, the full image coadds will add considerable depth to the existing unWISE and AllWISE coadds at 3.4 μm and 4.6 μm, thereby playing a critical role in enabling target selection for next-generation massive redshift surveys. We will utilize our new data products to map quasar variability to the depths required for the future DESI dark energy experiment, and to discover high-proper-motion objects in the solar neighborhood of the Milky Way to 1.4 magnitudes greater depth than previous searches.

  3. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percentage) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content (“factual”) or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), that is, cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, defined as oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404

  4. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percentage) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), that is, cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, defined as oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
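
    The interval arithmetic behind statements such as "14.5% (10.5% to 18.6% at a 95% confidence interval)" is a confidence interval for a proportion. The sketch below computes Wald and Wilson intervals; the counts used (42 errors among 290 quotations) are hypothetical numbers chosen only to land near the reported rate, not figures taken from the review.

    ```python
    import math

    def proportion_ci(k, n, z=1.96):
        """Wald and Wilson ~95% confidence intervals for a proportion k/n."""
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        wald = (p - half, p + half)
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        w_half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        wilson = (centre - w_half, centre + w_half)
        return p, wald, wilson

    # Hypothetical counts (42 errors among 290 quotations) chosen only to land near 14.5%.
    p, wald, wilson = proportion_ci(42, 290)
    print(f"rate = {p:.1%}; Wald CI = ({wald[0]:.1%}, {wald[1]:.1%}); "
          f"Wilson CI = ({wilson[0]:.1%}, {wilson[1]:.1%})")
    ```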

  5. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports regarding various medical institutions' attempts to prevent dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors, and investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates at which "dispensing errors" progressed to "damage to patients" were then compared, as an index of danger, between the two groups and among the three classes. The rate of damage in the efficacy similarity (-) group was significantly higher than that in the efficacy similarity (+) group. Furthermore, among the three classes, the rate of damage was highest for initial phase errors and lowest for final phase errors. These results make clear that the earlier a dispensing error occurs in the process, the more severe the damage to patients becomes.

  6. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  7. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  8. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    ERIC Educational Resources Information Center

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…

  9. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  10. Community Wise: paving the way for empowerment in community reentry.

    PubMed

    Windsor, Liliane Cambraia; Jemal, Alexis; Benoit, Ellen

    2014-01-01

    Theoretical approaches traditionally applied in mental health and criminal justice interventions fail to address the historical and structural context that partially explains health disparities. Community Wise was developed to address this gap. It is a 12-week group intervention informed by Critical Consciousness Theory and designed to prevent substance abuse, related health risk behaviors, psychological distress, and reoffending among individuals with a history of incarceration and substance abuse. This paper reports findings from the first implementation and pilot evaluation of Community Wise in two community-based organizations. This pre-posttest evaluation pilot-tested Community Wise and used findings to improve the intervention. Twenty-six participants completed a phone and clinical screening, baseline, 6- and 12-week follow-ups, and a focus group at the end of the intervention. Measures assessed participants' demographic information, psychological distress, substance use, criminal offending, HIV risk behaviors, community cohesion, community support, civic engagement, critical consciousness, ethnic identification, group cohesion, client satisfaction, and acquired treatment skills. Research methods were found to be feasible and useful in assessing the intervention. Results indicated that while Community Wise is a promising intervention, several changes need to be made in order to enhance the intervention. Community Wise is a new approach where oppressed individuals join in critical dialogue, tap into existing community resources, and devise, implement and evaluate their own community solutions to structural barriers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Asteroid Euphrosyne as Seen by WISE

    NASA Image and Video Library

    2015-08-03

    The asteroid Euphrosyne glides across a field of background stars in this time-lapse view from NASA's WISE spacecraft. WISE obtained the images used to create this view over a period of about a day around May 17, 2010, during which it observed the asteroid four times. Because WISE (renamed NEOWISE in 2013) is an infrared telescope, it senses heat from asteroids. Euphrosyne is quite dark in visible light, but glows brightly at infrared wavelengths. This view is a composite of images taken at four different infrared wavelengths: 3.4 microns (color-coded blue), 4.6 microns (cyan), 12 microns (green) and 22 microns (red). The moving asteroid appears as a string of red dots because it is much cooler than the distant background stars. Stars have temperatures in the thousands of degrees, but the asteroid is cooler than room temperature. Thus the stars are represented by shorter wavelength (hotter) blue colors in this view, while the asteroid is shown in longer wavelength (cooler) reddish colors. The WISE spacecraft was put into hibernation in 2011 upon completing its goal of surveying the entire sky in infrared light. WISE cataloged three quarters of a billion objects, including asteroids, stars and galaxies. In August 2013, NASA decided to reinstate the spacecraft on a mission to find and characterize more asteroids. http://photojournal.jpl.nasa.gov/catalog/PIA19645

  12. Micro-PIV/LIF measurements on electrokinetically-driven flow in surface modified microchannels

    NASA Astrophysics Data System (ADS)

    Ichiyanagi, Mitsuhisa; Sasaki, Seiichi; Sato, Yohei; Hishida, Koichi

    2009-04-01

    Effects of surface modification patterning on flow characteristics were investigated experimentally by measuring electroosmotic flow velocities, which were obtained by micron-resolution particle image velocimetry using a confocal microscope. The depth-wise velocity was evaluated by using the continuity equation and the velocity data. The microchannel was composed of a poly(dimethylsiloxane) chip and a borosilicate cover-glass plate. Surface modification patterns were fabricated by modifying octadecyltrichlorosilane (OTS) on the glass surface. OTS can decrease the electroosmotic flow velocity compared to the velocity in the glass microchannel. For the surface charge varying parallel to the electric field, the depth-wise velocity was generated at the boundary area between OTS and the glass surfaces. For the surface charge varying perpendicular to the electric field, the depth-wise velocity did not form because the surface charge did not vary in the stream-wise direction. The surface charge pattern with the oblique stripes yielded a three-dimensional flow in a microchannel. Furthermore, the oblique patterning was applied to a mixing flow field in a T-shaped microchannel, and mixing efficiencies were evaluated from heterogeneity degree of fluorescent dye intensity, which was obtained by laser-induced fluorescence. It was found that the angle of the oblique stripes is an important factor to promote the span-wise and depth-wise momentum transport and contributes to the mixing flow in a microchannel.

  13. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…

  14. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of LSC prediction error at data points and the RMS of estimated noise of observations are decreased by 39% and 67%, respectively. However, the RMS of LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
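
    The paper's direct CVE formula is not reproduced in the abstract; the sketch below instead illustrates the closely related closed-form leave-one-out identity that holds for collocation/Gaussian-process type predictors, where the i-th cross-validation error equals [C^-1 y]_i / [C^-1]_ii for the full signal-plus-noise covariance C. The covariance model, noise level, and data are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D observations and an assumed exponential covariance model.
n = 200
pts = np.sort(rng.uniform(0.0, 100.0, n))            # data locations (km, assumed)
y = np.sin(pts / 10.0) + 0.05 * rng.standard_normal(n)

def signal_cov(a, b, variance=1.0, corr_len=15.0):
    return variance * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_len)

noise_var = 0.05 ** 2
C = signal_cov(pts, pts) + noise_var * np.eye(n)      # signal-plus-noise covariance

# Direct, vectorised cross-validation errors: CVE_i = [C^-1 y]_i / [C^-1]_ii,
# which avoids refitting the predictor n times.
Cinv = np.linalg.inv(C)
cve = (Cinv @ y) / np.diag(Cinv)

# Element-wise check for a single point, leaving it out explicitly.
i = 42
mask = np.arange(n) != i
C_red = signal_cov(pts[mask], pts[mask]) + noise_var * np.eye(n - 1)
pred_i = signal_cov(pts[i:i + 1], pts[mask]) @ np.linalg.solve(C_red, y[mask])
print("element-wise CVE:", y[i] - pred_i[0], " direct CVE:", cve[i])
print("RMS of CVEs:", np.sqrt(np.mean(cve ** 2)))
```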

  15. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
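
    The gating idea above, holding the beam whenever predictive uncertainty is high, can be illustrated in isolation with an off-the-shelf Gaussian process regressor. This is not the authors' EKF-GPR+ pipeline; the breathing trace, kernel, look-ahead, and threshold below are assumptions made for the example:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic breathing trace (mm) sampled at 26 Hz, a stand-in for a real trace.
t = np.arange(0, 60, 1 / 26)
trace = 5 * np.sin(2 * np.pi * t / 4.0) + 0.3 * rng.standard_normal(t.size)

# Train a GPR on a sliding window of recent samples and predict ~384 ms ahead
# (10 samples at 26 Hz); gate the beam when the predictive std is large.
lookahead = 10
window = 130
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

i = 1000                                    # an arbitrary prediction instant
gpr.fit(t[i - window:i].reshape(-1, 1), trace[i - window:i])

X_future = np.array([[t[i + lookahead]]])
mean, std = gpr.predict(X_future, return_std=True)
gate_threshold = 1.0                        # mm, assumed clinical tolerance
beam_on = std[0] < gate_threshold
print(f"predicted={mean[0]:.2f} mm, std={std[0]:.2f} mm, beam_on={beam_on}")
```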

  16. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine if the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)

  17. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  18. Caring Wisely: A Program to Support Frontline Clinicians and Staff in Improving Healthcare Delivery and Reducing Costs.

    PubMed

    Gonzales, Ralph; Moriates, Christopher; Lau, Catherine; Valencia, Victoria; Imershein, Sarah; Rajkomar, Alvin; Prasad, Priya; Boscardin, Christy; Grady, Deborah; Johnston, S

    2017-08-01

    We describe a program called "Caring Wisely"®, developed by the University of California, San Francisco's (UCSF) Center for Healthcare Value, to increase the value of services provided at UCSF Health. The overarching goal of the Caring Wisely® program is to catalyze and advance delivery system redesign and innovations that reduce costs, enhance healthcare quality, and improve health outcomes. The program is designed to engage frontline clinicians and staff, aided by experienced implementation scientists, to develop and implement interventions specifically designed to address overuse, underuse, or misuse of services. Financial savings of the program are intended to cover the program costs. The theoretical underpinnings for the design of the Caring Wisely® program emphasize the importance of stakeholder engagement, behavior change theory, market (target audience) segmentation, and process measurement and feedback. The Caring Wisely® program provides an institutional model for using crowdsourcing to identify "hot spot" areas of low-value care, inefficiency and waste, and for implementing robust interventions to address these areas. © 2017 Society of Hospital Medicine.

  19. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
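
    The report's exact expressions are not given in the abstract; purely as an illustration of the kind of error-function evaluation involved, the sketch below computes a generic Gaussian (Q-factor) approximation of the slot-comparison bit error rate for an APD receiver. The gain, excess-noise, and thermal-noise parameters are hypothetical and not taken from the cited report:

```python
import numpy as np
from scipy.special import erfc

def ber_gaussian_q(signal_photons, gain=100.0, excess_noise=2.0, thermal_sigma=50.0):
    """Illustrative Gaussian (Q-factor) approximation of the bit error rate for a
    binary-PPM slot comparison with an APD receiver. All parameters are assumed."""
    mu1 = gain * signal_photons                      # mean APD output, signal slot
    mu0 = 0.0                                        # mean output, empty slot
    sigma1 = np.sqrt(gain**2 * excess_noise * signal_photons + thermal_sigma**2)
    sigma0 = thermal_sigma
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * erfc(q / np.sqrt(2.0))              # only an error function is needed

for n in (100, 200, 400):
    print(f"{n} photons/bit -> BER ~ {ber_gaussian_q(n):.2e}")
```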

  20. Discovery of a Possible Early-T Thick-disk Subdwarf from the AllWISE2 Motion Survey

    NASA Astrophysics Data System (ADS)

    Kellogg, Kendra; Kirkpatrick, J. Davy; Metchev, Stanimir; Gagné, Jonathan; Faherty, Jacqueline K.

    2018-02-01

    We have discovered a potential T0 ± 1 subdwarf from a search for sources in the AllWISE2 Motion Survey that do not have counterparts in surveys at shorter wavelengths. With a tangential velocity of ∼170 km s‑1, this object—WISE J071121.36–573634.2—has kinematics that are consistent with the thick-disk population of the Milky Way. Spectral fits suggest a low metallicity for this object but also allow for the possibility of unresolved multiplicity. If WISE J0711–5736 is indeed an sdT0 dwarf, it would be only the second early-T subdwarf discovered to date. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.

  1. Target and (Astro-)WISE technologies Data federations and its applications

    NASA Astrophysics Data System (ADS)

    Valentijn, E. A.; Begeman, K.; Belikov, A.; Boxhoorn, D. R.; Brinchmann, J.; McFarland, J.; Holties, H.; Kuijken, K. H.; Verdoes Kleijn, G.; Vriend, W.-J.; Williams, O. R.; Roerdink, J. B. T. M.; Schomaker, L. R. B.; Swertz, M. A.; Tsyganov, A.; van Dijk, G. J. W.

    2017-06-01

    After its first implementation in 2003 the Astro-WISE technology has been rolled out in several European countries and is used for the production of the KiDS survey data. In the multi-disciplinary Target initiative this technology, nicknamed WISE technology, has been further applied to a large number of projects. Here, we highlight the data handling of other astronomical applications, such as VLT-MUSE and LOFAR, together with some non-astronomical applications such as the medical projects Lifelines and GLIMPS; the MONK handwritten text recognition system; and business applications by, amongst others, the Target Holding. We describe some of the most important lessons learned and describe the application of the data-centric WISE type of approach to the Science Ground Segment of the Euclid satellite.

  2. Programmatic Efforts Affect Retention of Women in Science and Engineering

    NASA Astrophysics Data System (ADS)

    Hathaway, Russel S.; Sharp, Sally; Davis, Cinda-Sue

    This article presents findings from a study that investigated the impact of a women in science and engineering residence program (WISE-RP) on the retention of women in science and engineering disciplines. From a matched sample of 1,852 science and engineering students, the authors compared WISE-RP participants with male and female control students for science and engineering retention. The findings suggest a strong connection between WISE-RP participation and science retention, but not engineering retention. The results also indicate that a WISE-RP is more effective in retaining White and Asian students than underrepresented students of color. The authors highlight the importance of combining academic and personal support in a residential learning program and draw implications for retaining women in science, mathematics, and engineering disciplines.

  3. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
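
    For reference, the standard form of the bound discussed above, for block length N, rate R in nats per channel use, input distribution Q, and channel transition probabilities P(j|k), is:

```latex
\bar{P}_e \;\le\; \exp\!\bigl[-N\,E_r(R)\bigr],
\qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\; \max_{Q}\, \bigl[\, E_0(\rho, Q) - \rho R \,\bigr],
\qquad
E_0(\rho, Q) \;=\; -\ln \sum_{j} \Bigl[\, \sum_{k} Q(k)\, P(j \mid k)^{1/(1+\rho)} \Bigr]^{1+\rho}
```

    The exponent E_r(R) is positive for all rates below channel capacity; the abstract's point is that the bound's looseness at low rates reflects the gap between the best codes and the ensemble average, not slack in bounding that average.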

  4. Comparative analysis of methods for detecting interacting loci

    PubMed Central

    2011-01-01

    Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295
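
    One of the assessment criteria above, the family-wise type I error rate, is easy to make concrete by simulation: under a global null, test m SNP-phenotype associations and record how often at least one p-value crosses the threshold, with and without a Bonferroni adjustment. The sample sizes, m, and alpha below are arbitrary illustrative choices, not values from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Monte Carlo estimate of the family-wise type I error rate (FWER) when testing
# m independent null SNP effects; all sizes here are illustrative assumptions.
m, n, alpha, n_sim = 50, 200, 0.05, 500
hits_raw = hits_bonf = 0

for _ in range(n_sim):
    pheno = rng.integers(0, 2, n)                    # binary phenotype, null model
    geno = rng.integers(0, 3, (n, m))                # additive 0/1/2 genotype coding
    pvals = np.array([stats.pearsonr(geno[:, j], pheno)[1] for j in range(m)])
    hits_raw += np.any(pvals < alpha)                # any false rejection at all?
    hits_bonf += np.any(pvals < alpha / m)           # Bonferroni-adjusted threshold

print(f"FWER, uncorrected: {hits_raw / n_sim:.3f}")  # close to 1 - (1 - alpha)^m
print(f"FWER, Bonferroni:  {hits_bonf / n_sim:.3f}") # close to alpha
```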

  5. Comparative analysis of methods for detecting interacting loci.

    PubMed

    Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue

    2011-07-05

    Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.

  6. A prospective audit of a nurse independent prescribing within critical care.

    PubMed

    Carberry, Martin; Connelly, Sarah; Murphy, Jennifer

    2013-05-01

    To determine the prescribing activity of different staff groups within intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has facilitated an opportunity for the ANPCC working in critical care to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011 in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors, namely patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data were later entered into a Microsoft Access database. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were reported as 40 errors in 1418 prescriptions (2·8%): ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates were significantly different for different prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). Having the lowest error rate, the nurse practitioners are at least as effective as other prescribing groups within this audit, in terms of errors only, in prescribing diligence. National data are required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.

  7. Using Medicines Wisely

    MedlinePlus

    ... or foods should I avoid? 2. Keep a Medicine List: Write down the important facts about each ...

  8. Accurate and Standardized Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Patterson, Tiffany; Asrress, Kaleab N; Marber, Michael; Redwood, Simon; Smith, Nicolas P; Lee, Jack

    2017-05-01

    Coronary wave intensity analysis (cWIA) has increasingly been applied in the clinical research setting to distinguish between the proximal and distal mechanical influences on coronary blood flow. Recently, a cWIA-derived clinical index demonstrated prognostic value in predicting functional recovery post-myocardial infarction. Nevertheless, the known operator dependence of the cWIA metrics currently hampers its routine application in clinical practice. Specifically, it was recently demonstrated that the cWIA metrics are highly dependent on the chosen Savitzky-Golay filter parameters used to smooth the acquired traces. Therefore, a novel method to make cWIA standardized and automatic was proposed and evaluated in vivo. The novel approach combines an adaptive Savitzky-Golay filter with high-order central finite differencing after ensemble-averaging the acquired waveforms. Its accuracy was assessed using in vivo human data. The proposed approach was then modified to automatically perform beat-wise cWIA. Finally, the feasibility (accuracy and robustness) of the method was evaluated. The automatic cWIA algorithm provided satisfactory accuracy under a wide range of noise scenarios (≤10% and ≤20% error in the estimation of wave areas and peaks, respectively). These results were confirmed when beat-by-beat cWIA was performed. An accurate, standardized, and automated cWIA was developed. Moreover, the feasibility of beat-wise cWIA was demonstrated for the first time. The proposed algorithm provides practitioners with a standardized technique that could broaden the application of cWIA in clinical practice, for example by enabling multicenter trials. Furthermore, the demonstrated potential of beat-wise cWIA opens the possibility of investigating coronary physiology in real time.
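
    As a rough illustration of the core cWIA computation that the filter parameters feed into (not the authors' adaptive algorithm), the sketch below differentiates synthetic ensemble-averaged pressure and velocity traces with a Savitzky-Golay filter and forms the net wave intensity as the product of the two derivatives. The sampling rate, window length, polynomial order, and waveforms are assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 1000.0                      # sampling rate in Hz (assumed)
dt = 1.0 / fs
t = np.arange(0, 1.0, dt)        # one cardiac cycle (assumed 60 bpm)

# Synthetic stand-ins for ensemble-averaged pressure (mmHg) and velocity (m/s).
pressure = 90 + 25 * np.sin(2 * np.pi * t) + 0.5 * np.random.randn(t.size)
velocity = 0.2 + 0.1 * np.sin(2 * np.pi * t + 0.5) + 0.005 * np.random.randn(t.size)

# Time derivatives via Savitzky-Golay smoothing differentiation; the window and
# polynomial order here are arbitrary choices, which is exactly the operator
# dependence the paper sets out to remove.
window, polyorder = 51, 3
dp_dt = savgol_filter(pressure, window, polyorder, deriv=1, delta=dt)
du_dt = savgol_filter(velocity, window, polyorder, deriv=1, delta=dt)

# Net wave intensity as the product of the derivatives (units left mixed here).
wave_intensity = dp_dt * du_dt
print("peak net wave intensity:", wave_intensity.max())
```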

  9. A high-throughput urinalysis of abused drugs based on a SPE-LC-MS/MS method coupled with an in-house developed post-analysis data treatment system.

    PubMed

    Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen

    2006-10-16

    A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve appropriate data from the analytical data files, compare results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports that indicate all positive and negative results in a batch-wise manner for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and errors due to data transfer, and is suitable for high-throughput laboratories operating in a batch-wise manner.
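
    The post-analysis step described above is essentially a rule-based comparison of each analyte's semi-quantitative result and MS match score against preset values. A simplified sketch of that logic follows; the drug list, cut-off concentrations, match-score threshold, and sample results are all hypothetical, and no Excel or Access output is produced here:

```python
# Hypothetical cut-off concentrations (ng/mL) for a few of the analytes above.
CUTOFFS = {"morphine": 300, "codeine": 300, "methadone": 300,
           "methylamphetamine": 500, "MDMA": 500, "ketamine": 100}

def classify(sample_results, min_match_score=0.7):
    """Return positive/negative calls per analyte for one urine sample."""
    report = {}
    for drug, cutoff in CUTOFFS.items():
        conc, score = sample_results.get(drug, (0.0, 0.0))
        positive = conc >= cutoff and score >= min_match_score
        report[drug] = "POSITIVE" if positive else "negative"
    return report

# Example batch: {sample id: {drug: (semi-quantitative conc, MS match score)}}
batch = {
    "S001": {"morphine": (850.0, 0.92), "codeine": (120.0, 0.88)},
    "S002": {"ketamine": (450.0, 0.95), "MDMA": (40.0, 0.55)},
}
for sample_id, results in batch.items():
    print(sample_id, classify(results))
```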

  10. Separate Medication Preparation Rooms Reduce Interruptions and Medication Errors in the Hospital Setting: A Prospective Observational Study.

    PubMed

    Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja

    2016-12-21

    Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.

  11. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
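
    As a toy illustration of the quality-filtering trade-off quantified above (discarding a fraction of bases to cut the error rate), the sketch below drops reads whose mean Phred quality falls below a threshold. The reads, quality strings, and threshold are made up, and FASTQ parsing is reduced to bare tuples:

```python
# Phred+33 encoding: quality = ASCII code - 33.
def mean_phred(quality_string, offset=33):
    return sum(ord(c) - offset for c in quality_string) / len(quality_string)

reads = [
    ("ACGTACGTAC", "IIIIIIIIII"),   # Q40 across the read
    ("ACGTTCGTAC", "!!!!!IIIII"),   # first half Q0 -- likely error-prone
    ("ACGTACGAAC", "5555555555"),   # Q20 across the read
]

threshold = 25.0
kept = [(seq, qual) for seq, qual in reads if mean_phred(qual) >= threshold]
print(f"kept {len(kept)} of {len(reads)} reads at mean quality >= {threshold}")
```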

  12. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  13. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
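
    A minimal sketch of the LASSO-then-classifier idea, shown here for the SVM variant on synthetic data with roughly the study's class balance; the features, regularization strength, and SVM settings are assumptions, and scikit-learn merely stands in for whatever tools the authors used:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data with roughly the study's shape (48 GCD vs 124 non-GCD firms);
# the real financial ratios are not public here, so synthetic features are used.
X, y = make_classification(n_samples=172, n_features=30, n_informative=8,
                           weights=[0.72, 0.28], random_state=0)

lasso_svm = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(Lasso(alpha=0.01, max_iter=10000))),  # variable selection
    ("clf", SVC(kernel="rbf", C=1.0)),                               # downstream classifier
])

scores = cross_val_score(lasso_svm, X, y, cv=5)   # fivefold cross validation
print(f"LASSO-SVM fivefold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```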

  14. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

    Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.

  15. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
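
    The learning update described above, correcting the command from cycle to cycle using both the tracking error and its rate, can be sketched on a simple second-order servo model. The model parameters, learning gains, and trajectory below are illustrative choices, not the paper's; the structure mirrors the scheme in spirit only, and convergence of the toy loop depends on the chosen gains:

```python
import numpy as np

dt, T = 0.001, 2.0
t = np.arange(0, T, dt)
wn, zeta = 8.0, 0.4                            # assumed servo natural freq / damping
desired = 0.5 * (1 - np.cos(np.pi * t / T))    # smooth point-to-point trajectory

def run_cycle(command):
    """Simulate y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u with explicit Euler steps."""
    y = yd = 0.0
    out = np.empty_like(command)
    for k, u in enumerate(command):
        ydd = wn**2 * (u - y) - 2 * zeta * wn * yd
        yd += ydd * dt
        y += yd * dt
        out[k] = y
    return out

command = desired.copy()
kp, kd = 0.6, 0.05                             # learning gains (chosen by hand)
for cycle in range(8):
    y = run_cycle(command)
    err = desired - y
    err_rate = np.gradient(err, dt)
    command = command + kp * err + kd * err_rate   # PD-type off-line learning update
    print(f"cycle {cycle}: max |error| = {np.max(np.abs(err)):.4f}")
```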

  16. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    PubMed

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
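
    The reported P value can be recovered from the counts given above with a Pearson chi-square test on the 2x2 table of specimens with and without errors in the two periods (no continuity correction):

```python
from scipy.stats import chi2_contingency

# Errors vs error-free specimens, before and after the change in SSC administration.
before = [19, 4760 - 19]
after = [8, 5065 - 8]

chi2, p, dof, _ = chi2_contingency([before, after], correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")   # p ~ 0.0225, as reported
```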

  17. Citation Help in Databases: The More Things Change, the More They Stay the Same

    ERIC Educational Resources Information Center

    Van Ullen, Mary; Kessler, Jane

    2012-01-01

    In 2005, the authors reviewed citation help in databases and found an error rate of 4.4 errors per citation. This article describes a follow-up study that revealed a modest improvement in the error rate to 3.4 errors per citation, still unacceptably high. The most problematic area was retrieval statements. The authors conclude that librarians…

  18. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  19. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  20. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
