Sample records for character error rate

  1. The dependence of crowding on flanker complexity and target-flanker similarity

    PubMed Central

    Bernard, Jean-Baptiste; Chung, Susana T.L.

    2013-01-01

    We examined the effects of the spatial complexity of flankers and target-flanker similarity on the performance of identifying crowded letters. On each trial, observers identified the middle character of random strings of three characters (“trigrams”) briefly presented at 10° below fixation. We tested the 26 lowercase letters of the Times-Roman and Courier fonts, a set of 79 characters (letters and non-letters) of the Times-Roman font, and the uppercase letters of two highly complex ornamental fonts, Edwardian and Aristocrat. Spatial complexity of characters was quantified by the length of the morphological skeleton of each character, and target-flanker similarity was defined based on a psychometric similarity matrix. Our results showed that (1) letter identification error rate increases with flanker complexity up to a certain value, beyond which error rate becomes independent of flanker complexity; (2) the increase of error rate is slower for high-complexity target letters; (3) error rate increases with target-flanker similarity; and (4) mislocation error rate increases with target-flanker similarity. These findings, combined with the current understanding of the faulty feature integration account of crowding, provide some constraints on how the feature integration process could cause perceptual errors. PMID:21730225
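
    The skeleton-length complexity measure above is easy to reproduce. Below is a minimal sketch, assuming a binarized glyph image and using scikit-image's skeletonize; the function name glyph_complexity and the toy glyph are illustrative, not from the paper.

    ```python
    # Quantify a character's spatial complexity as the length of its
    # morphological skeleton, approximated here by the skeleton's pixel count.
    # Assumes a foreground-on-background binary glyph image.
    import numpy as np
    from skimage.morphology import skeletonize

    def glyph_complexity(glyph: np.ndarray) -> int:
        """Return the skeleton length (in pixels) of a binary glyph."""
        skeleton = skeletonize(glyph > 0)  # thin strokes to 1-px curves
        return int(skeleton.sum())         # pixel count ~ skeleton length

    # Toy example: a 7x7 "T"-like glyph.
    glyph = np.zeros((7, 7), dtype=np.uint8)
    glyph[1, 1:6] = 1  # horizontal bar
    glyph[2:6, 3] = 1  # vertical stem
    print(glyph_complexity(glyph))
    ```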

  2. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, allowing noise characters to be inserted into, and characters to be dropped from, the keyword during retrieval provides robustness against character segmentation errors, while allowing each keyword character to be substituted by one of its OCR recognition candidates, or by any other character, provides robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method; however, the precision rate was 64% lower.
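
    The matching idea can be sketched as a weighted edit distance in which insertions and deletions model segmentation errors and substitutions are discounted when the keyword character appears among the OCR candidates for a position. A minimal sketch under those assumptions; the cost values and names are ours, not the paper's.

    ```python
    def keyword_distance(keyword, ocr_chars, candidates, cand_cost=0.25):
        """Weighted edit distance between a keyword and a span of OCR output.

        candidates: one set of alternative characters per OCR position;
        substituting a keyword character that the OCR engine listed as a
        candidate is cheap (cand_cost), any other substitution costs 1.
        """
        m, n = len(keyword), len(ocr_chars)
        d = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = float(i)   # keyword characters dropped by segmentation
        for j in range(1, n + 1):
            d[0][j] = float(j)   # noise characters inserted by segmentation
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if keyword[i - 1] == ocr_chars[j - 1]:
                    sub = 0.0
                elif keyword[i - 1] in candidates[j - 1]:
                    sub = cand_cost
                else:
                    sub = 1.0
                d[i][j] = min(d[i - 1][j] + 1.0,      # dropped character
                              d[i][j - 1] + 1.0,      # inserted noise
                              d[i - 1][j - 1] + sub)  # (candidate) substitution
        return d[m][n]

    # OCR read "c1ock" for "clock" but listed 'l' among candidates for '1'.
    print(keyword_distance("clock", "c1ock",
                           [set(), {"l"}, set(), set(), set()]))  # -> 0.25
    ```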

  3. A Developmental Study of Chinese Children's Word and Character Reading.

    PubMed

    Li, Tong; Wang, Ying; Tong, Xiuhong; McBride, Catherine

    2017-02-01

    To investigate the relationship between Chinese children's character and word reading, 62 third-grade and 50 fifth-grade children in Hong Kong were asked to read single characters and words composed of these characters. Results showed that words helped children of both grades to recognize characters. Compared to older children, younger children's character reading relied more on the word in which the character appeared as a component; younger children made more errors confusing the correct character with words related to it. Younger children's greater rate of meaning-related errors also underscored the role of the word in their character learning. This study confirmed the important role of words in children's character learning and provided evidence for a developmental pattern of character and word reading in Chinese.

  4. Postprocessing for character recognition using pattern features and linguistic information

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi

    1993-04-01

    We propose a new method of postprocessing for character recognition that uses pattern features and linguistic information. The method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters, and is characterized by employing two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and by the existence of characters with similar patterns, so it is not practical for a character recognition system to recognize all characters in detail. First, the postprocessing method generates a candidate character table by recognizing the simplest features of characters. Then, it selects words corresponding to the characters in the candidate table by referring to a word and grammar dictionary, and chooses the most suitable words. If the correct character is included in the candidate character table, this process can correct an error; if it is not included, it cannot. Therefore, the method uses linguistic information (the word and grammar dictionary) to presume characters that are missing from the candidate table, and then verifies each presumed character by character recognition using complex features. When this method was applied to an online character recognition system, the accuracy of character recognition improved from 93.5% to 94.7%. This proved to be the case when it was used on editorials from a Japanese newspaper (Asahi Shinbun).

  5. Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts

    NASA Astrophysics Data System (ADS)

    Goswami, Subhra Sundar

    One of the important reasons for the poor recognition rate of optical character recognition (OCR) systems is error in character segmentation. In the case of Bangla scripts, these errors occur for several reasons, including incorrect detection of the matra (headline), over-segmentation, and under-segmentation. We have proposed a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut-borders for segmentation. Our method can be successfully used to achieve high recognition results.

  6. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average number of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
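
    The error-propagation effect that favors comma-free codes is easy to demonstrate for Huffman codes: one flipped bit can desynchronize the decoder for several characters. A small simulation sketch; the corpus and helper names are illustrative, reproducing the phenomenon rather than the study's files or codes.

    ```python
    # Build a Huffman code, flip one bit in the encoded stream, and count
    # how many decoded characters come out wrong.
    import heapq
    from collections import Counter

    def huffman_code(text):
        """Build a Huffman code book {char: bitstring} from character counts."""
        heap = [(n, i, {c: ""}) for i, (c, n) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            n0, _, c0 = heapq.heappop(heap)
            n1, i1, c1 = heapq.heappop(heap)
            merged = {c: "0" + b for c, b in c0.items()}
            merged.update({c: "1" + b for c, b in c1.items()})
            heapq.heappush(heap, (n0 + n1, i1, merged))
        return heap[0][2]

    def decode(bits, book):
        inverse, out, cur = {b: c for c, b in book.items()}, [], ""
        for bit in bits:
            cur += bit
            if cur in inverse:
                out.append(inverse[cur])
                cur = ""
        return "".join(out)

    text = "narrative compression over a noisy channel " * 4
    book = huffman_code(text)
    bits = "".join(book[c] for c in text)
    flipped = bits[:10] + ("1" if bits[10] == "0" else "0") + bits[11:]
    decoded = decode(flipped, book)
    # Crude aligned-position count of characters decoded in error; after a
    # desynchronization the alignment itself may drift, so this is a lower bound.
    errors = sum(a != b for a, b in zip(decoded, text))
    print(f"{len(bits) / len(text):.2f} bits/char, "
          f"{errors} characters in error from a single bit error")
    ```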

  7. The Uncanny Valley Does Not Interfere with Level 1 Visual Perspective Taking

    PubMed Central

    MacDorman, Karl F.; Srinivas, Preethi; Patel, Himalaya

    2014-01-01

    When a computer-animated human character looks eerily realistic, viewers report a loss of empathy; they have difficulty taking the character’s perspective. To explain this perspective-taking impairment, known as the uncanny valley, a novel theory is proposed: The more human or less eerie a character looks, the more it interferes with level 1 visual perspective taking when the character’s perspective differs from that of the human observer (e.g., because the character competitively activates shared circuits in the observer’s brain). The proposed theory is evaluated in three experiments involving a dot-counting task in which participants either assumed or ignored the perspective of characters varying in their human photorealism and eeriness. Although response times and error rates were lower when the number of dots faced by the observer and character were the same (congruent condition) than when they were different (incongruent condition), no consistent pattern emerged between the human photorealism or eeriness of the characters and participants’ response times and error rates. Thus, the proposed theory is unsupported for level 1 visual perspective taking. As the effects of the uncanny valley on empathy have not previously been investigated systematically, these results provide evidence to eliminate one possible explanation. PMID:25221383

  8. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    ERIC Educational Resources Information Center

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  9. Effect of refractive error on temperament and character properties.

    PubMed

    Kalkan Akcay, Emine; Canan, Fatih; Simavli, Huseyin; Dal, Derya; Yalniz, Hacer; Ugurlu, Nagihan; Gecici, Omer; Cagil, Nurullah

    2015-01-01

    To determine the effect of refractive error on temperament and character properties using Cloninger's psychobiological model of personality. Using the Temperament and Character Inventory (TCI), the temperament and character profiles of 41 participants with refractive errors (17 with myopia, 12 with hyperopia, and 12 with myopic astigmatism) were compared to those of 30 healthy control participants. Here, temperament comprised the traits of novelty seeking, harm-avoidance, and reward dependence, while character comprised traits of self-directedness, cooperativeness, and self-transcendence. Participants with refractive error showed significantly lower scores on purposefulness, cooperativeness, empathy, helpfulness, and compassion (P<0.05, P<0.01, P<0.05, P<0.05, and P<0.01, respectively). Refractive error might have a negative influence on some character traits, and different types of refractive error might have different temperament and character properties. These personality traits may be implicated in the onset and/or perpetuation of refractive errors and may be a productive focus for psychotherapy.

  10. Direct Measures of Character Mislocalizations with Masked/Unmasked Exposures.

    ERIC Educational Resources Information Center

    Chastain, Garvin; And Others

    Butler (1980) compared errors representing intrusions and mislocalizations on 3x3 letter displays under pattern-mask versus no-mask conditions and found that pattern masking increased character mislocalization errors (naming a character in the display but not in the target position as being the target) over intrusion errors (naming a character not…

  11. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven desirable to users, partly owing to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing through a MATLAB interface and for Word typing over 4 to 5 experimental days. The results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  12. Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems

    PubMed Central

    Shinozaki, Takahiro

    2018-01-01

    Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 characters/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data. PMID:29425248
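
    The recognition stage described (one HMM per character, maximum-likelihood selection) can be sketched with hmmlearn on synthetic 2-D EOG-like traces. This is a simplified Gaussian-HMM stand-in for the paper's system, not its DNN-HMM; all data, patterns, and names here are invented.

    ```python
    # Per-character Gaussian HMMs; classify a trace by maximum log-likelihood.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def synth_stroke(targets, n=30):
        """Fake 2-channel EOG trace dwelling near each gaze target in turn."""
        return np.concatenate([t + 0.1 * rng.standard_normal((n, 2))
                               for t in np.asarray(targets, dtype=float)])

    # Two invented "characters", each a sequence of gaze targets.
    patterns = {"A": [(0, 0), (1, 1), (2, 0)], "B": [(0, 0), (0, 1), (1, 1)]}

    models = {}
    for ch, targets in patterns.items():
        X = np.concatenate([synth_stroke(targets) for _ in range(5)])
        model = hmm.GaussianHMM(n_components=len(targets),
                                covariance_type="diag", n_iter=25,
                                random_state=0)
        model.fit(X, lengths=[30 * len(targets)] * 5)  # 5 training traces
        models[ch] = model

    # Classify a fresh trace; expect "B".
    test = synth_stroke(patterns["B"])
    print(max(models, key=lambda ch: models[ch].score(test)))
    ```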

  13. A Robust Semi-Parametric Test for Detecting Trait-Dependent Diversification.

    PubMed

    Rabosky, Daniel L; Huang, Huateng

    2016-03-01

    Rates of species diversification vary widely across the tree of life and there is considerable interest in identifying organismal traits that correlate with rates of speciation and extinction. However, it has been challenging to develop methodological frameworks for testing hypotheses about trait-dependent diversification that are robust to phylogenetic pseudoreplication and to directionally biased rates of character change. We describe a semi-parametric test for trait-dependent diversification that explicitly requires replicated associations between character states and diversification rates to detect effects. To use the method, diversification rates are reconstructed across a phylogenetic tree with no consideration of character states. A test statistic is then computed to measure the association between species-level traits and the corresponding diversification rate estimates at the tips of the tree. The empirical value of the test statistic is compared to a null distribution that is generated by structured permutations of evolutionary rates across the phylogeny. The test is applicable to binary discrete characters as well as continuous-valued traits and can accommodate extremely sparse sampling of character states at the tips of the tree. We apply the test to several empirical data sets and demonstrate that the method has acceptable Type I error rates. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
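
    The core permutation logic is straightforward to sketch. The method's structured, phylogeny-aware permutations are simplified here to plain shuffles, and all inputs are synthetic.

    ```python
    # Permutation test: association between tip-level diversification rates
    # and a binary trait, versus a shuffled null.
    import numpy as np

    rng = np.random.default_rng(1)
    tip_rates = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # rate estimates
    trait = rng.integers(0, 2, size=200)                      # binary states
    tip_rates[trait == 1] *= 1.3                              # planted effect

    def assoc(rates, states):
        """Difference in mean rate between the two character states."""
        return rates[states == 1].mean() - rates[states == 0].mean()

    observed = assoc(tip_rates, trait)
    null = np.array([assoc(rng.permutation(tip_rates), trait)
                     for _ in range(5000)])
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
    print(f"observed = {observed:.3f}, permutation p = {p:.4f}")
    ```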

  14. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
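
    A toy example makes the string-versus-symbol inconsistency concrete: with a posterior over whole strings, the 0-1 (MAP) decision can differ from the decision minimizing expected symbol error. The distribution below is invented, and per-position (Hamming) cost stands in for the general Levenshtein cost.

    ```python
    # Invented posterior over three candidate transcriptions.
    posterior = {"cat": 0.4, "cor": 0.3, "cot": 0.3}

    # 0-1 cost: pick the single most probable string (MAP decision).
    map_string = max(posterior, key=posterior.get)

    # Symbol cost: minimize expected per-position error by taking the
    # posterior-marginal mode of each symbol independently.
    length = len(next(iter(posterior)))
    symbol_string = ""
    for i in range(length):
        marginal = {}
        for s, p in posterior.items():
            marginal[s[i]] = marginal.get(s[i], 0.0) + p
        symbol_string += max(marginal, key=marginal.get)

    # MAP gives "cat"; the symbol-level decision gives "cot", whose expected
    # Hamming cost (0.7) beats that of "cat" (0.9) under this posterior.
    print(map_string, symbol_string)
    ```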

  15. Identifying hidden rate changes in the evolution of a binary morphological character: the evolution of plant habit in campanulid angiosperms.

    PubMed

    Beaulieu, Jeremy M; O'Meara, Brian C; Donoghue, Michael J

    2013-09-01

    The growth of phylogenetic trees in scope and in size is promising from the standpoint of understanding a wide variety of evolutionary patterns and processes. With trees comprised of larger, older, and globally distributed clades, it is likely that the lability of a binary character will differ significantly among lineages, which could lead to errors in estimating transition rates and the associated inference of ancestral states. Here we develop and implement a new method for identifying different rates of evolution in a binary character along different branches of a phylogeny. We illustrate this approach by exploring the evolution of growth habit in Campanulidae, a flowering plant clade containing some 35,000 species. The distribution of woody versus herbaceous species calls into question the use of traditional models of binary character evolution. The recognition and accommodation of changes in the rate of growth form evolution in different lineages demonstrates, for the first time, a robust picture of growth form evolution across a very large, very old, and very widespread flowering plant clade.

  16. Study of style effects on OCR errors in the MEDLINE database

    NASA Astrophysics Data System (ADS)

    Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.

    2005-01-01

    The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make errors in character recognition that must be corrected in order for the process to achieve the requisite accuracy. The correction process works by feeding words that have characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator, who must then manually verify the word or correct the error. The majority of these errors are contained in the affiliation information zone, where the characters are in italics or small fonts; therefore only affiliation information data are used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, bold, etc., as well as OCR confidence levels. The motivation for this research is that if a correlation between character style and types of errors exists, it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for characters with diacritics.

  17. Post processing for offline Chinese handwritten character string recognition

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong

    2012-01-01

    Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes, and differing geometric characteristics, Chinese handwritten character string recognition is a challenging problem. Among current methods, however, the over-segmentation and merging method, which integrates geometric information, character recognition information, and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors, and that they mainly occur around non-Chinese characters. A Chinese character string contains not only wide characters, namely Chinese characters, but also narrow characters such as digits and letters of the alphabet. The segmentation errors are mainly caused by a uniform geometric model being imposed on all segmented candidate characters. To solve this problem, post processing is employed to improve the recognition accuracy of narrow characters. On one hand, separate geometric models are established for wide characters and narrow characters, under which narrow characters are less prone to being merged. On the other hand, the top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post processing method is evaluated on two datasets totaling 1,405 handwritten address strings. Wide character recognition accuracy improved slightly, and narrow character recognition accuracy increased by 10.41% and 10.03%, respectively. This indicates that the post processing method is effective in improving the recognition accuracy of narrow characters.

  18. Effects of the "Beauty Is Good" Stereotype on Children's Information Processing.

    ERIC Educational Resources Information Center

    Ramsey, Jennifer L.; Langlois, Judith H.

    2002-01-01

    Two studies examined schematic information processing as a function of attractiveness stereotyping among 3- to 7- year-olds. Found that children made more errors identifying female characters with stereotype-inconsistent traits but either did just the opposite with male characters or had no difference in errors with male characters. Findings pose…

  19. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

    Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains, and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. The method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates: over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.

  20. Automatic feature design for optical character recognition using an evolutionary search procedure.

    PubMed

    Stentiford, F W

    1985-03-01

    An automatic evolutionary search is applied to the problem of feature extraction in an OCR application. A performance measure based on feature independence is used to generate features which do not appear to suffer from peaking effects [17]. Features are extracted from a training set of 30 600 machine printed 34 class alphanumeric characters derived from British mail. Classification results on the training set and a test set of 10 200 characters are reported for an increasing number of features. A 1.01 percent forced decision error rate is obtained on the test data using 316 features. The hardware implementation should be cheap and fast to operate. The performance compares favorably with current low cost OCR page readers.

  1. Assessment of the relative merits of a few methods to detect evolutionary trends.

    PubMed

    Laurin, Michel

    2010-12-01

    Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Globally, the results clearly show that FIC-based methods and PGLS are globally better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.

  2. A QUICK KEY TO THE SUBFAMILIES AND GENERA OF ANTS OF THE SAVANNAH RIVER SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, D

    2007-09-04

    This taxonomic key was devised to support development of a Rapid Bioassessment Protocol using ants at the Savannah River Site. The emphasis is on 'rapid' and, because the available keys contained a very large number of genera not known to occur at the Savannah River Site, we found that the available keys were unwieldy. Because these keys contained many more genera than we would ever encounter and because this larger number of genera required more couplets in the key and often required examination of characters that are difficult to assess without higher magnifications (60X or higher), more time was required to process samples. In developing this set of keys I emphasized character states that are easier for nonspecialists to recognize. I recognize that the character sets used may lead to some errors but I believe that the error rate will be small and, for the purpose of rapid bioassessment, this error rate will be acceptable provided that overall sample sizes are adequate. Oliver and Beattie (1996a, 1996b) found that for rapid assessment of biodiversity the same results were found when identifications were done to morphospecies by people with minimal expertise as when the same data sets were identified by subject matter experts. Basset et al. (2004) concluded that it was not as important to correctly identify all species as it was to be sure that the study included as many functional groups as possible. If your study requires high levels of accuracy, it is highly recommended that, when you key out a specimen and have any doubts concerning the identification, you should refer to keys in Bolton (1994) or to the other keys used to develop this area specific taxonomic key.

  3. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection

    PubMed Central

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709

  5. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.

  6. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.

  7. A paper form processing system with an error correcting function for reading handwritten Kanji strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsumi Marukawa; Kazuki Nakashima; Masashi Koga

    1994-12-31

    This paper presents a paper form processing system with an error correcting function for reading handwritten kanji strings. In paper form processing systems, names and addresses are important key data, and this paper takes up an error correcting method for name and address recognition in particular. The method automatically corrects errors of the kanji OCR (Optical Character Reader) with the help of word dictionaries and other knowledge. Moreover, it allows names and addresses to be written in any style. The method consists of word matching and "furigana" verification for name strings, and address approval for address strings. For word matching, kanji name candidates are extracted by automaton-type word matching. In "furigana" verification, kana candidate characters recognized by the kana OCR are compared with kana entries retrieved from the name dictionary based on the kanji name candidates given by the word matching. The correct name is selected from the results of word matching and furigana verification. The address approval efficiently searches for the right address with a bottom-up procedure that follows hierarchical relations from a lower place name to an upper one, using the positional relations among the place names. In experiments on 5,032 forms, we ascertained that the error correcting method substantially improves the recognition rate and processing speed.

  8. Post processing of optically recognized text via second order hidden Markov model

    NASA Astrophysics Data System (ADS)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system for text generated by Optical Character Recognition (OCR). A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The reason for choosing a second-order HMM is to keep track of bigrams so that the model can represent the system more accurately. Based on experiments with training data of 159,733 characters and testing on 5,688 characters, the model was able to correct 43.38% of the errors with a precision of 75.34%. However, the precision value indicates that the model introduced some new errors, decreasing the net correction percentage to 26.4%.

  9. Initial Ship Design Using a Pearson Correlation Coefficient and Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Moon, Byung Young; Kim, Soo Young; Kang, Gyung Ju

    In this paper, we analyzed the correlation between geometrical characteristics and resistance and effective horsepower by using the Pearson correlation coefficient, one of the data mining methods. We then formed the input data from the ship's geometrical characteristics that correlate strongly with the output data, and calculated effective horsepower and resistance using a Neuro-Fuzzy system. To verify the calculation, data from 9 of 11 container ships were employed as data for the Neuro-Fuzzy system, and the others were employed as verification data. After analyzing the rate of error between the existing data and the calculated data, we concluded that the calculated data agree well with the existing data.
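
    A minimal sketch of the screening step, ranking candidate geometric inputs by absolute Pearson correlation with the target. The ship data below are synthetic; only the use of the Pearson coefficient follows the abstract.

    ```python
    # Rank candidate geometric characteristics by |Pearson r| with the
    # target (effective horsepower), as a feature-screening step.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 11                                  # e.g. the 11 container ships
    length = rng.uniform(150, 350, n)       # candidate geometric inputs
    beam = rng.uniform(20, 45, n)
    block_coeff = rng.uniform(0.55, 0.75, n)
    ehp = 12.0 * length + 40.0 * beam + rng.normal(0.0, 80.0, n)  # target

    features = {"length": length, "beam": beam, "block_coeff": block_coeff}
    ranked = sorted(features.items(),
                    key=lambda kv: -abs(np.corrcoef(kv[1], ehp)[0, 1]))
    for name, x in ranked:
        print(f"{name:12s} r = {np.corrcoef(x, ehp)[0, 1]:+.3f}")
    ```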

  10. P2 and behavioral effects of stroke count in Chinese characters: Evidence for an analytic and attentional view.

    PubMed

    Yang, Shasha; Zhang, Shunmei; Wang, Quanhong

    2016-08-15

    The inconsistent stroke-count effect in Chinese character recognition has resulted in an intense debate between the analytic and holistic views of character processing. The length effects of English words on behavioral responses and event-related potentials (ERPs) are similarly inconclusive. In this study, we identified any behavioral and ERP stroke-count effects when orthographic neighborhood sizes are balanced across three stroke counts. A delayed character-matching task was conducted while ERPs were recorded. The behavioral data indicated that both response latency and error rate increased with increasing stroke count. The ERP data showed higher P2 but lower N2 amplitudes in the large count than in the median count condition. A higher P2 can reflect increased attentional load and reduced attentional resource for processing each stroke because of the additional strokes in the large count condition. The behavioral and ERP effects of stroke count provide evidence for the analytic view of character processing but also provide evidence against the holistic view. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Confabulation and epistemic authority.

    PubMed

    Robins, Sarah

    2018-01-01

    Mahr & Csibra (M&C) claim that episodic remembering's autonoetic character serves as an indicator of epistemic authority. This proposal is difficult to reconcile with the existence of confabulation errors - where participants fabricate memories of experiences that never happened to them. Making confabulation errors damages one's epistemic authority, but these false memories have an autonoetic character.

  12. Age determination of mallards

    USGS Publications Warehouse

    Krapu, G.L.; Johnson, D.H.; Dane, C.W.

    1979-01-01

    A technique for distinguishing adult from yearling wild mallards (Anas platyrhynchos), from late winter through the nesting season, was developed by applying discriminant analysis procedures to selected wing feather characters of 126 yearlings and 76 adults (2-year-olds) hand-reared from wild eggs during 1974, 1975, and 1977. Average values for feather characters generally increased as the birds advanced from yearlings to adults. Black-white surface area of greater secondary covert 2 was the single most reliable aging character identified during the study. The error rate was lowest in females (3%) when discriminant functions were used with measurements of primary 1 weight and black-white area of greater secondary covert 2 and in males (9%) when the functions were used with black-white area of greater secondary coverts 1, 2, and 3. Methodology precludes aging of birds in the field during capture operations.
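
    The statistical core of such an aging technique is a linear discriminant over feather measurements with a cross-validated error rate. A sketch with simulated measurements; the real study used primary-1 weight and the black-white areas of greater secondary coverts.

    ```python
    # Linear discriminant separating yearlings from adults on two simulated
    # feather characters, with error rate estimated by cross-validation.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_yearling, n_adult = 126, 76   # sample sizes from the abstract
    # Adults shifted upward, as feather values increased with age.
    X = np.vstack([rng.normal([4.0, 110.0], [0.6, 15.0], (n_yearling, 2)),
                   rng.normal([4.8, 135.0], [0.6, 15.0], (n_adult, 2))])
    y = np.array([0] * n_yearling + [1] * n_adult)  # 0 = yearling, 1 = adult

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"estimated error rate: {1 - acc:.1%}")
    ```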

  13. Learning curve of speech recognition.

    PubMed

    Kauppinen, Tomi A; Kaipio, Johanna; Koivikko, Mika P

    2013-12-01

    Speech recognition (SR) speeds patient care processes by reducing report turnaround times. However, concerns have emerged about prolonged training and an added secretarial burden for radiologists. We assessed how much proofing is required of radiologists who have years of experience with SR compared with radiologists new to SR, and estimated how quickly the new users become as skilled as the experienced users. We studied SR log entries for 0.25 million reports from 154 radiologists and, after careful exclusions, defined a group of 11 experienced radiologists and 71 radiologists new to SR (24,833 and 122,093 reports, respectively). Data were analyzed for sound file and report lengths, character-based error rates, and words unknown to the SR's dictionary. Experienced radiologists corrected 6 characters per report; new users corrected 11. Some users presented a very unfavorable learning curve, with error rates not declining as expected. New users' reports were longer, and data for the experienced users indicate that their reports, initially equally lengthy, shortened over a period of several years. For most radiologists, only minor corrections of dictated reports were necessary. While new users adopted SR quickly, with a subset outperforming experienced users from the start, identification of users struggling with SR will help facilitate troubleshooting and support.
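
    Character-based error rates of this kind are conventionally computed as a Levenshtein distance between the dictated draft and the final signed report, normalized by the reference length. A minimal, self-contained sketch of that conventional definition; the study's exact metric may differ.

    ```python
    # Character error rate via dynamic-programming Levenshtein distance.
    def character_error_rate(reference: str, hypothesis: str) -> float:
        """CER = (substitutions + insertions + deletions) / len(reference)."""
        m, n = len(reference), len(hypothesis)
        prev = list(range(n + 1))          # DP row for the empty prefix
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                sub = prev[j - 1] + (reference[i - 1] != hypothesis[j - 1])
                cur[j] = min(prev[j] + 1, cur[j - 1] + 1, sub)
            prev = cur
        return prev[n] / max(m, 1)

    print(character_error_rate("no acute infiltrate", "no acute infiltrate"))
    print(character_error_rate("no acute infiltrate", "no acute in filtrate"))
    ```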

  14. Printed Arabic optical character segmentation

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Ayyesh, Muna; Qaroush, Aziz; Tumar, Iyad

    2015-03-01

    Considerable progress in recognition techniques for many non-Arabic characters has been achieved. In contrast, little effort has been devoted to research on Arabic characters. In any Optical Character Recognition (OCR) system, the segmentation step is usually the essential stage, the one to which an extensive portion of processing is devoted and to which a considerable share of recognition errors is attributed. In this research, a novel segmentation approach for machine-printed Arabic text with diacritics is proposed. The proposed method reduces computation and errors, gives a clear description of the sub-word, and has advantages over skeleton-based approaches, in which data and information about the character can be lost. Initial evaluation and testing of the proposed method were carried out in MATLAB and show promising results of 98.7%.

  15. Improving semi-text-independent method of writer verification using difference vector

    NASA Astrophysics Data System (ADS)

    Li, Xin; Ding, Xiaoqing

    2009-01-01

    The semi-text-independent method of writer verification based on the linear framework is a method that can use all characters of two handwriting samples to discriminate between writers when the text contents are known. The samples may contain only small numbers of characters, and those characters may even be entirely different between the two samples. This fills the vacancy between the classical text-dependent and text-independent methods of writer verification. Moreover, the identity of each character is exploited by the semi-text-independent method in this paper. Two types of standard templates, generated from many writer-unknown handwritten samples and printed samples of each character, are introduced to represent the content information of each character. The difference vectors of the character samples are obtained by subtracting the standard templates from the original feature vectors and are used to replace the original vectors in the process of writer verification. By removing a large amount of content information while retaining the style information, the verification accuracy of the semi-text-independent method is improved. On a handwriting database involving 30 writers, when the query handwriting and the reference handwriting are each composed of 30 distinct characters, the average equal error rate (EER) of writer verification reaches 9.96%. When the handwritings contain 50 characters, the average EER falls to 6.34%, which is 23.9% lower than the EER obtained without using the difference vectors.

  16. Categorization influences illusory conjunctions.

    PubMed

    Esterman, Michael; Prinzmetal, William; Robertson, Lynn

    2004-08-01

    Illusory conjunctions (ICs) provide evidence for a binding problem that must be resolved in vision. Objects that are perceptually grouped are more likely to have their features erroneously conjoined. We examined whether semantic grouping, determined by category membership (letter vs. number), also influences illusory conjunction rates. Participants were instructed to detect an "L" or a "7" among briefly presented character strings and to report its color. Despite high shape discrimination accuracy, participants often made color conjunction errors, reporting instead the color of a distractor character, "O". This distractor could be ambiguously interpreted as a letter or a number. The status of the "O" was determined by other noncolored flanker characters, which were either letters or numbers. When both the target and flankers were of the same category, participants made more ICs than when the target and flankers were of different categories. This finding demonstrates that alphanumeric categorization can precede and subsequently influence binding.

  17. Lexical morphology and its role in the writing process: evidence from a case of acquired dysgraphia.

    PubMed

    Badecker, W; Hillis, A; Caramazza, A

    1990-06-01

    A case of acquired dysgraphia is presented in which the deficit is attributed to an impairment at the level of the Graphemic Output Buffer. It is argued that this patient's performance can be used to identify the representational character of the processing units that are stored in the Orthographic Output Lexicon. In particular, it is argued that the distribution of spelling errors and the types of lexical items which affect error rates indicate that the lexical representations passed from the lexical output system to the Graphemic Output Buffer correspond to the productive morphemes of the language.

  18. Experimental confirmation of a character-facing bias in literacy development.

    PubMed

    McIntosh, Robert D; Anderson, Eilidh L; Henderson, Rowena M

    2018-06-01

    When learning to write, children often mirror-reverse individual letters. For children learning to use the Latin alphabet, in a left-to-right writing culture, letters that appear to face left (such as J and Z) seem to be more prone to reversal than those that appear to face right (such as B and C). It has been proposed that, because most asymmetrical Latin letters face right, children statistically learn this general regularity and are subsequently biased to write any letter rightward. The evidence for this character-facing bias is circumstantial, however, because letter-facing direction is confounded with other factors that could affect error rates; for instance, J and Z are left-facing, but they are also infrequent. We report the first controlled experimental test of the character-facing bias. We taught 43 Scottish primary schoolchildren (aged 4.8-5.8 years) four artificial, letter-like characters, two of which were left-facing and two of which were right-facing. The characters were novel and so were not subject to prior exposure effects, and alternate groups of children were assigned to identical but mirror-reflected character sets. Children were three times more likely to mirror-write a novel character they had learned in a left-facing format than to mirror-write one they had learned in a right-facing format. This provides the first experimental confirmation of the character-facing bias in literacy development and suggests that implicit knowledge acquired from exposure to written language is readily generalized to novel letter-like forms. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Noisy text categorization.

    PubMed

    Vinciarelli, Alessandro

    2005-12-01

    This work presents categorization experiments performed over noisy texts. By noisy, we mean any text obtained through an extraction process (affected by errors) from media other than digital texts (e.g., transcriptions of speech recordings extracted with a recognition system). The performance of a categorization system over the clean and noisy (Word Error Rate between approximately 10 and approximately 50 percent) versions of the same documents is compared. The noisy texts are obtained through handwriting recognition and simulation of optical character recognition. The results show that the performance loss is acceptable for Recall values up to 60-70 percent depending on the noise sources. New measures of the extraction process performance, allowing a better explanation of the categorization results, are proposed.

  20. Deterministic error correction for nonlocal spatial-polarization hyperentanglement

    PubMed Central

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-01-01

    Hyperentanglement is an effective quantum source for quantum communication networks owing to its high capacity, low loss rate, and its ability to teleport quantum particles completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication. PMID:26861681

  2. Evaluating true BCI communication rate through mutual information and language models.

    PubMed

    Speier, William; Arnold, Corey; Pouratian, Nader

    2013-01-01

    Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to making progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
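
    For contrast with the proposed metric, the baseline commonly used in the BCI literature (the Wolpaw-style information transfer rate) encodes exactly the assumptions listed above: equiprobable symbols, memoryless selection, and uniformly random errors. A sketch of that baseline, not of the paper's improved metric:

    ```python
    # Wolpaw-style bits per selection for an N-ary symmetric channel.
    import math

    def bits_per_selection(n_symbols: int, accuracy: float) -> float:
        """log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1)), with guards."""
        n, p = n_symbols, accuracy
        if p >= 1.0:
            return math.log2(n)
        if p <= 0.0:
            return 0.0   # simplifying guard for the degenerate case
        return (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))

    # A 36-character speller at 90% selection accuracy, 5 selections/min:
    rate = 5 * bits_per_selection(36, 0.90)
    print(f"{rate:.1f} bits/min")
    ```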

  3. Interaction of hypertension and age in visual selective attention performance.

    PubMed

    Madden, D J; Blumenthal, J A

    1998-01-01

    Previous research suggests that some aspects of cognitive performance decline as a joint function of age and hypertension. In this experiment, 51 unmedicated individuals with mild essential hypertension and 48 normotensive individuals, 18-78 years of age, performed a visual search task. The estimated time required to identify a display character and shift attention between display positions increased with age. This attention shift time did not differ significantly between hypertensive and normotensive participants, but regression analyses indicated some mediation of the age effect by blood pressure. For individuals less than 60 years of age, the error rate was greater for hypertensive than for normotensive participants. Although the present design could detect effects of only moderate to large size, the results suggest that effects of hypertension may be more evident in a relatively general measure of performance (mean error rate) than in the speed of shifting visual attention.

  4. A QUICK KEY TO THE SUBFAMILIES AND GENERA OF ANTS OF THE SAVANNAH RIVER SITE, AIKEN, SC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, D

    2006-10-04

    This taxonomic key was devised to support development of a Rapid Bioassessment Protocol using ants at the Savannah River Site. The emphasis is on "rapid": the available keys contained a large number of genera not known to occur at the Savannah River Site, which made them unwieldy. Because these keys covered more genera than we would likely encounter, they required more couplets and often required examination of characters that are difficult to assess without higher magnifications (60X or higher), so more time was needed to process samples. In developing this set of keys I recognize that the character sets used may lead to some errors, but I believe that the error rate will be small and, for the purpose of rapid bioassessment, acceptable provided that overall sample sizes are adequate. Oliver and Beattie (1996a, 1996b) found that rapid biodiversity assessments gave the same results when identifications were made to morphospecies by people with minimal expertise as when the same data sets were identified by subject matter experts. Basset et al. (2004) concluded that it was not as important to correctly identify all species as it was to ensure that the study included as many functional groups as possible. If your study requires high levels of accuracy, it is highly recommended that, whenever you key out a specimen and have any doubts concerning the identification, you refer to the keys in Bolton (1994) or to the other keys used to develop this area-specific taxonomic key.

  5. Intelligent OCR Processing.

    ERIC Educational Resources Information Center

    Sun, Wei; And Others

    1992-01-01

    Identifies types and distributions of errors in text produced by optical character recognition (OCR) and proposes a process using machine learning techniques to recognize and correct errors in OCR texts. Results of experiments indicating that this strategy can reduce human interaction required for error correction are reported. (25 references)…

  6. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    PubMed Central

    Su, Zhong; Liu, Ning; Li, Qing

    2015-01-01

    A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation method for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the character of the bell-shaped resonator, to analyze the influence of the main error sources, and to set up an error model for the BVG. The error sources are classified by their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation, and noise filtering. The experimentally obtained bias instability improved from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. After error compensation, there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593

  7. Scene Text Recognition using Similarity and a Lexicon with Sparse Belief Propagation

    PubMed Central

    Weinman, Jerod J.; Learned-Miller, Erik; Hanson, Allen R.

    2010-01-01

    Scene text recognition (STR) is the recognition of text anywhere in the environment, such as signs and store fronts. Relative to document recognition, it is challenging because of font variability, minimal language context, and uncontrolled conditions. Much information available to solve this problem is frequently ignored or used sequentially. Similarity between character images is often overlooked as useful information. Because of language priors, a recognizer may assign different labels to identical characters. Directly comparing characters to each other, rather than only a model, helps ensure that similar instances receive the same label. Lexicons improve recognition accuracy but are used post hoc. We introduce a probabilistic model for STR that integrates similarity, language properties, and lexical decision. Inference is accelerated with sparse belief propagation, a bottom-up method for shortening messages by reducing the dependency between weakly supported hypotheses. By fusing information sources in one model, we eliminate unrecoverable errors that result from sequential processing, improving accuracy. In experimental results recognizing text from images of signs in outdoor scenes, incorporating similarity reduces character recognition error by 19%, the lexicon reduces word recognition error by 35%, and sparse belief propagation reduces the lexicon words considered by 99.9% with a 12X speedup and no loss in accuracy. PMID:19696446

  8. Embodying Others in Immersive Virtual Reality: Electro-Cortical Signatures of Monitoring the Errors in the Actions of an Avatar Seen from a First-Person Perspective.

    PubMed

    Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria

    2016-01-13

    Brain monitoring of errors in one's own and others' actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first- or third-person perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in the first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and the sense of embodiment. Error positivity was similar in the first-person (1PP) and third-person (3PP) conditions, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they are one's own errors. Detecting errors in others' actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they are one's own errors. The results have important implications for understanding how the brain can control the external world, and thus for creating new brain-computer interfaces. Copyright © 2016 the authors 0270-6474/16/360268-12$15.00/0.

  9. Trace-shortened Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Solomon, G.

    1994-01-01

    Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
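
    As a rough illustration of the arithmetic behind such limits (a sketch assuming only the textbook Reed-Solomon relations, nothing specific to TSRS codes): with m-bit characters the length is bounded near 2^m, and correcting t character errors costs 2t parity characters.

        # Textbook RS parameter arithmetic: n <= 2**m - 1 for conventional RS
        # (2**m for singly extended RS), and k = n - 2*t to correct t errors.
        def rs_params(m, t, extended=False):
            n = 2**m if extended else 2**m - 1
            k = n - 2 * t
            if k <= 0:
                raise ValueError("t too large for this character size")
            return n, k

        # The (255, 223) code long used in NASA telecommunications:
        # 8-bit characters, t = 16.
        print(rs_params(8, 16))   # -> (255, 223)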

  10. Alexia and agraphia with lesions of the angular and supramarginal gyri: evidence for the disruption of sequential processing.

    PubMed

    Sakurai, Yasuhisa; Asami, Masahiko; Mannen, Toru

    2010-01-15

    To determine the features of alexia or agraphia with a left angular or supramarginal gyrus lesion. We assessed the reading and writing abilities of three patients using kanji (Japanese morphograms) and kana (Japanese syllabograms). Patient 1 showed kana alexia and kanji agraphia following a hemorrhage in the left angular gyrus and the adjacent lateral occipital gyri. Patient 2 presented with minimal pure agraphia for both kanji and kana after an infarction in the left angular gyrus involving part of the supramarginal gyrus. Patient 3 also showed moderate pure agraphia for both kanji and kana after an infarction in the left supramarginal and postcentral gyri. All three patients made transposition errors (changing of sequential order of kana characters) in reading. Patient 1 showed letter-by-letter reading and a word-length effect and made substitution errors (changing hiragana [one form of kana] characters in a word to katakana [another form of kana] characters and vice versa) in writing. Alexia occurs as "angular" alexia only when the lesion involves the adjacent lateral occipital gyri. Transposition errors suggest disrupted sequential phonological processing from the angular and lateral occipital gyri to the supramarginal gyrus. Substitution errors suggest impaired allographic conversion between hiragana and katakana attributable to a dysfunction in the angular/lateral occipital gyri.

  11. Reducing errors from the electronic transcription of data collected on paper forms: a research data case study.

    PubMed

    Wahi, Monika M; Parks, David V; Skeate, Robert C; Goldin, Steven B

    2008-01-01

    We conducted a reliability study comparing single data entry (SE) into a Microsoft Excel spreadsheet to entry using the existing forms (EF) feature of the Teleforms software system, in which optical character recognition is used to capture data off of paper forms designed in non-Teleforms software programs. We compared the transcription of data from multiple paper forms from over 100 research participants representing almost 20,000 data entry fields. Error rates for SE were significantly lower than those for EF, so we chose SE for data entry in our study. Data transcription strategies from paper to electronic format should be chosen based on evidence from formal evaluations, and their design should be contemplated during the paper forms development stage.

  12. Reducing Errors from the Electronic Transcription of Data Collected on Paper Forms: A Research Data Case Study

    PubMed Central

    Wahi, Monika M.; Parks, David V.; Skeate, Robert C.; Goldin, Steven B.

    2008-01-01

    We conducted a reliability study comparing single data entry (SE) into a Microsoft Excel spreadsheet to entry using the existing forms (EF) feature of the Teleforms software system, in which optical character recognition is used to capture data off of paper forms designed in non-Teleforms software programs. We compared the transcription of data from multiple paper forms from over 100 research participants representing almost 20,000 data entry fields. Error rates for SE were significantly lower than those for EF, so we chose SE for data entry in our study. Data transcription strategies from paper to electronic format should be chosen based on evidence from formal evaluations, and their design should be contemplated during the paper forms development stage. PMID:18308994

  13. Invention and validation of an automated camera system that uses optical character recognition to identify patient name mislabeled samples.

    PubMed

    Hawker, Charles D; McCarthy, William; Cleveland, David; Messinger, Bonnie L

    2014-03-01

    Mislabeled samples are a serious problem in most clinical laboratories. Published error rates range from 0.39/1000 to as high as 1.12%. Standardization of bar codes and label formats has not yet achieved the needed improvement. The mislabel rate in our laboratory, although low compared with published rates, prompted us to seek a solution to achieve zero errors. To reduce or eliminate our mislabeled samples, we invented an automated device using 4 cameras to photograph the outside of a sample tube. The system uses optical character recognition (OCR) to look for discrepancies between the patient name in our laboratory information system (LIS) vs the patient name on the customer label. All discrepancies detected by the system's software then require human inspection. The system was installed on our automated track and validated with production samples. We obtained 1,009,830 images during the validation period, and every image was reviewed. OCR passed approximately 75% of the samples, and no mislabeled samples were passed. The 25% failed by the system included 121 samples actually mislabeled by patient name and 148 samples with spelling discrepancies between the patient name on the customer label and the patient name in our LIS. Only 71 of the 121 mislabeled samples detected by OCR were found through our normal quality assurance process. We have invented an automated camera system that uses OCR technology to identify potential mislabeled samples. We have validated this system using samples transported on our automated track. Full implementation of this technology offers the possibility of zero mislabeled samples in the preanalytic stage.

  14. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    PubMed

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  15. Teaching Identity Matching of Braille Characters to Beginning Braille Readers

    ERIC Educational Resources Information Center

    Toussaint, Karen A.; Scheithauer, Mindy C.; Tiger, Jeffrey H.; Saunders, Kathryn J.

    2017-01-01

    We taught three children with visual impairments to make tactile discriminations of the braille alphabet within a matching-to-sample format. That is, we presented participants with a braille character as a sample stimulus, and they selected the matching stimulus from a three-comparison array. In order to minimize participant errors, we initially…

  16. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates

    PubMed Central

    Wagner, Peter J.

    2012-01-01

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266

  17. Delimiting species of Protaphorura (Collembola: Onychiuridae): integrative evidence based on morphology, DNA sequences and geography.

    PubMed

    Sun, Xin; Zhang, Feng; Ding, Yinhuan; Davies, Thomas W; Li, Yu; Wu, Donghui

    2017-08-15

    Species delimitation remains a significant challenge when the diagnostic morphological characters are limited. Integrative taxonomy was applied to the genus Protaphorura (Collembola: Onychiuridae), one of the most taxonomically difficult groups of soil animals. Three delimitation approaches (morphology, molecular markers and geography) were applied, providing rigorous species validation criteria with an acceptably low error rate. Multiple molecular approaches, including distance- and evolutionary model-based methods, were used to determine species boundaries based on 144 standard barcode sequences. Twenty-two molecular putative species were consistently recovered across molecular and geographical analyses. Geographic criteria proved to be an efficient delimitation method for onychiurids. Further morphological examination, based on the combination of the number of pseudocelli, parapseudocelli and ventral mesothoracic chaetae, confirmed 18 of the 22 molecular units as taxa, with six of them described as new species. These characters were found to be of high taxonomical value. This study highlights the potential benefits of integrative taxonomy, particularly the simultaneous use of molecular and geographical tools, as a powerful way of ascertaining the true diversity of the Onychiuridae. Our study also highlights that discovering new morphological characters remains central to achieving a full understanding of collembolan taxonomy.

  18. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In studying grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar variables G. For the first issue, we discover a feature that can simplify the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose using double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches achieve the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes (Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding) in compression ratio, and has error tolerance capabilities similar to those of LZW coding under similar circumstances.
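
    The grammar-transform idea can be illustrated with a Re-Pair-style toy: repeatedly replace the most frequent adjacent pair of symbols with a new grammar variable. This is a sketch of grammar-based compression in general, not the Yang-Kieffer irreducible grammar transform used in the paper, and it omits the arithmetic-coding stage.

        # Digram substitution: each repeated pair becomes a new variable <k>.
        from collections import Counter

        def digram_grammar(seq):
            seq, rules, next_var = list(seq), {}, 0
            while True:
                pairs = Counter(zip(seq, seq[1:]))
                if not pairs:
                    break
                pair, count = pairs.most_common(1)[0]
                if count < 2:
                    break                         # no repeated digram left
                var = f"<{next_var}>"
                next_var += 1
                rules[var] = pair
                out, i = [], 0
                while i < len(seq):
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                        out.append(var)
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                seq = out
            return seq, rules

        start, rules = digram_grammar("abcabcabc")
        print(start, rules)   # shortened start string plus rules like <0> -> ('a', 'b')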

  19. Effectiveness of feature and classifier algorithms in character recognition systems

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.

    1993-04-01

    At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for the participating character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits, and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learning Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper, 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.

  20. [Character of refractive errors in population study performed by the Area Military Medical Commission in Lodz].

    PubMed

    Nowak, Michał S; Goś, Roman; Smigielski, Janusz

    2008-01-01

    To determine the prevalence of refractive errors in a population. A retrospective review of medical examinations for entry to military service from the Area Military Medical Commission in Lodz. Ophthalmic examinations were performed. We used statistical analysis to review the results. Statistical analysis revealed that refractive errors occurred in 21.68% of the population. The most common refractive error was myopia. 1) The most common ocular diseases are refractive errors, especially myopia (21.68% in total). 2) Refractive surgery and contact lenses should be allowed as possible corrections of refractive errors for military service.

  1. Optical character recognition: an illustrated guide to the frontier

    NASA Astrophysics Data System (ADS)

    Nagy, George; Nartker, Thomas A.; Rice, Stephen V.

    1999-12-01

    We offer a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors made by three commercial devices. After discussing briefly the character recognition abilities of humans and computers, we present illustrated examples of recognition errors. The top level of our taxonomy of the causes of errors consists of Imaging Defects, Similar Symbols, Punctuation, and Typography. The analysis of a series of 'snippets' from this perspective provides insight into the strengths and weaknesses of current systems, and perhaps a road map to future progress. The examples were drawn from the large-scale tests conducted by the authors at the Information Science Research Institute of the University of Nevada, Las Vegas. By way of conclusion, we point to possible approaches for improving the accuracy of today's systems. The talk is based on our eponymous monograph, recently published in The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1999.

  2. Cortical regions involved in semantic processing investigated by repetitive navigated transcranial magnetic stimulation and object naming.

    PubMed

    Sollmann, Nico; Tanigawa, Noriko; Tussis, Lorena; Hauck, Theresa; Ille, Sebastian; Maurer, Stefanie; Negwer, Chiara; Zimmer, Claus; Ringel, Florian; Meyer, Bernhard; Krieg, Sandro M

    2015-04-01

    Knowledge about the cortical representation of semantic processing is mainly derived from functional magnetic resonance imaging (fMRI) or direct cortical stimulation (DCS) studies. Because DCS is regarded as the gold standard in terms of language mapping but can only be used during awake surgery due to its invasive character, repetitive navigated transcranial magnetic stimulation (rTMS), a non-invasive modality that uses a technique similar to DCS, seems highly feasible for use in the investigation of semantic processing in the healthy human brain. A total of 100 (50 left-hemispheric and 50 right-hemispheric) rTMS-based language mappings were performed in 50 purely right-handed, healthy volunteers during an object-naming task. All rTMS-induced semantic naming errors were then counted and evaluated systematically. Furthermore, since the distribution of stimulations within both hemispheres varied between individuals and cortical regions stimulated, all elicited errors were standardized and subsequently related to their cortical sites by projecting the mapping results into the cortical parcellation system (CPS). Overall, the highest left-hemispheric semantic error rates were observed after targeting rTMS to the posterior middle frontal gyrus (pMFG; standardized error rate: 7.3‰), anterior supramarginal gyrus (aSMG; 5.6‰), and ventral postcentral gyrus (vPoG; 5.0‰). In contrast, the highest right-hemispheric error rates occurred after stimulation of the posterior superior temporal gyrus (pSTG; 12.4‰), middle superior temporal gyrus (mSTG; 6.2‰), and anterior supramarginal gyrus (aSMG; 6.2‰). Although error rates were low, the rTMS-based approach to investigating semantic processing during object naming yields convincing results compared with the current literature. Therefore, rTMS seems a valuable, safe, and reliable tool for the investigation of semantic processing within the healthy human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  4. Interspecific aggression and character displacement of competitor recognition in Hetaerina damselflies.

    PubMed

    Anderson, Christopher N; Grether, Gregory F

    2010-02-22

    In zones of sympatry between closely related species, species recognition errors in a competitive context can cause character displacement in agonistic signals and competitor recognition functions, just as species recognition errors in a mating context can cause character displacement in mating signals and mate recognition. These two processes are difficult to distinguish because the same traits can serve as both agonistic and mating signals. One solution is to test for sympatric shifts in recognition functions. We studied competitor recognition in Hetaerina damselflies by challenging territory holders with live tethered conspecific and heterospecific intruders. Heterospecific intruders elicited less aggression than conspecific intruders in species pairs with dissimilar wing coloration (H. occisa/H. titia, H. americana/H. titia) but not in species pairs with similar wing coloration (H. occisa/H. cruentata, H. americana/H. cruentata). Natural variation in the area of black wing pigmentation on H. titia intruders correlated negatively with heterospecific aggression. To directly examine the role of wing coloration, we blackened the wings of H. occisa or H. americana intruders and measured responses of conspecific territory holders. This treatment reduced territorial aggression at multiple sites where H. titia is present, but not at allopatric sites. These results provide strong evidence for agonistic character displacement.

  5. Segmentation of touching handwritten Japanese characters using the graph theory method

    NASA Astrophysics Data System (ADS)

    Suwa, Misako

    2000-12-01

    Projection analysis methods have been widely used to segment Japanese character strings. However, if adjacent characters have overhanging strokes or a touching point does not correspond to the histogram minimum, these methods are prone to errors. In contrast, non-projection analysis methods proposed for numerals or alphabetic characters cannot simply be applied to Japanese characters because of differences in character structure. Based on an oversegmenting strategy, a new pre-segmentation method is presented in this paper: touching patterns are represented as graphs, and touching strokes are regarded as elements of proper edge cutsets. Using graph-theoretical techniques, the cutset matrix is calculated. Then, by applying pruning rules, potential touching strokes are determined and the patterns are oversegmented. Moreover, this algorithm was confirmed in simulations to be valid for touching patterns with overhanging strokes and for doubly connected patterns.
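
    The cutset idea itself is easy to demonstrate, though the sketch below (using networkx's generic minimum_edge_cut rather than the paper's cutset-matrix computation) is only an illustration: model a touching pattern as a graph of stroke junctions, and a minimum edge cut yields candidate touching strokes whose removal splits the pattern into two components.

        import networkx as nx

        G = nx.Graph()
        # Two dense clusters (one per character) joined by one touching stroke.
        G.add_edges_from([(1, 2), (2, 3), (3, 1),      # left character
                          (4, 5), (5, 6), (6, 4),      # right character
                          (3, 4)])                     # touching stroke
        print(nx.minimum_edge_cut(G))   # e.g. {(3, 4)}: the lone bridge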

  6. DiscML: an R package for estimating evolutionary rates of discrete characters using maximum likelihood.

    PubMed

    Kim, Tane; Hao, Weilong

    2014-09-27

    The study of discrete characters is crucial for the understanding of evolutionary processes. Even though great advances have been made in the analysis of nucleotide sequences, computer programs for non-DNA discrete characters are often dedicated to specific analyses and lack flexibility. Discrete characters often have different transition rate matrices, variable rates among sites and sometimes contain unobservable states. To obtain the ability to accurately estimate a variety of discrete characters, programs with sophisticated methodologies and flexible settings are desired. DiscML performs maximum likelihood estimation for evolutionary rates of discrete characters on a provided phylogeny with the options that correct for unobservable data, rate variations, and unknown prior root probabilities from the empirical data. It gives users options to customize the instantaneous transition rate matrices, or to choose pre-determined matrices from models such as birth-and-death (BD), birth-death-and-innovation (BDI), equal rates (ER), symmetric (SYM), general time-reversible (GTR) and all rates different (ARD). Moreover, we show application examples of DiscML on gene family data and on intron presence/absence data. DiscML was developed as a unified R program for estimating evolutionary rates of discrete characters with no restriction on the number of character states, and with flexibility to use different transition models. DiscML is ideal for the analyses of binary (1s/0s) patterns, multi-gene families, and multistate discrete morphological characteristics.
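
    DiscML itself is an R package, but the kind of likelihood it evaluates can be sketched in a few lines: the pruning algorithm for one binary (presence/absence) character on a small fixed tree under a symmetric two-state model. The tree shape, branch lengths, and grid search below are made-up illustration, not DiscML's interface.

        import math

        def p_matrix(rate, t):
            """Transition probabilities for a symmetric 2-state model."""
            same = 0.5 * (1 + math.exp(-2 * rate * t))
            return [[same, 1 - same], [1 - same, same]]

        def tree_likelihood(rate, states, t1=0.1, t2=0.1, t3=0.2, t4=0.3):
            a, b, c = states        # observed 0/1 states at leaves A, B, C
            P1, P2, P3, P4 = (p_matrix(rate, t) for t in (t1, t2, t3, t4))
            # Conditional likelihoods at the internal node joining A and B.
            L_int = [P1[x][a] * P2[x][b] for x in (0, 1)]
            # Root of tree ((A,B),C) with a uniform prior over the two states.
            return sum(0.5 * sum(P3[r][x] * L_int[x] for x in (0, 1)) * P4[r][c]
                       for r in (0, 1))

        # Crude grid search for the ML rate given the leaf pattern (1, 1, 0).
        best = max((r / 100 for r in range(1, 500)),
                   key=lambda r: tree_likelihood(r, (1, 1, 0)))
        print(f"ML rate ~ {best:.2f}")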

  7. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    ERIC Educational Resources Information Center

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)

  8. Verbal Short-Term Memory Deficits in Chinese Children with Dyslexia may not be a Problem with the Activation of Phonological Representations.

    PubMed

    Zhao, Jing; Yang, Yang; Song, Yao-Wu; Bi, Hong-Yan

    2015-11-01

    This study explored the underlying mechanism of the verbal short-term memory deficit in Chinese children with developmental dyslexia. Twenty-four children with dyslexia and 28 age-matched normal readers participated in the study. They were required to memorize a visually presented series of six Chinese characters and identify them from a list also including code-specific distracters and non-code-specific distracters. Error rates were recorded and were higher for code-specific distracters in all three conditions, revealing phonological, visual, and semantic similarity effects respectively. Group comparisons showed a stronger phonological similarity effect in dyslexic group, suggesting intact activation of phonological representations of target characters. Children with dyslexia also exhibited a greater semantic similarity effect, revealing stronger activation of semantic representations, while visual similarity effects were equivalent to controls. These results suggest that the verbal short-term memory deficit in Chinese dyslexics might not stem from insufficient activation of phonological information. Based the semantic activation of target characters in dyslexics is greater than in controls, it is possible that the memory deficit of dyslexia is related with deficient inhibition of target semantic representations in short-term memory. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively little work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Drawing on 76 data sets covering a range of plant, invertebrate, and vertebrate animal groups, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four-rate-category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, for the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
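
    For readers unfamiliar with the discrete approximation discussed above, the following sketch computes K gamma rate categories by the median variant of Yang's 1994 scheme, using scipy's gamma distribution (each category takes the median rate of its quantile bin, rescaled to mean 1); the Bayesian machinery in the study itself is more involved.

        from scipy.stats import gamma

        def discrete_gamma_rates(alpha, k=4):
            # Shape alpha with scale 1/alpha gives a mean-1 gamma distribution.
            qs = [(2 * i + 1) / (2 * k) for i in range(k)]
            medians = gamma.ppf(qs, a=alpha, scale=1 / alpha)
            return medians * (k / medians.sum())   # renormalize to mean exactly 1

        print(discrete_gamma_rates(0.5))   # strong rate heterogeneity
        print(discrete_gamma_rates(2.0))   # milder heterogeneity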

  10. Violent film characters' portrayal of alcohol, sex, and tobacco-related behaviors.

    PubMed

    Bleakley, Amy; Romer, Daniel; Jamieson, Patrick E

    2014-01-01

    To determine the extent to which movies popular with adolescents feature characters who jointly engage in violence and other risk behaviors. We hypothesized that violent characters engage in other risk behaviors equally often in films rated appropriate for children over 12 (PG-13) and in Restricted (R)-rated films. Content analysis of a sample of top-grossing movies from 1985 to 2010 (n = 390). We coded movies for the presence of at least 1 main character who was involved in violence and either sex, tobacco, or alcohol use within a 5-minute movie segment and throughout a film. Approximately 90% of the movies contained a segment with a main character involved in violence, and ~77% of the films had the same character engaging in at least 1 other risk behavior. A violent character was portrayed most often partaking in alcohol-related and sexual behaviors. G and PG movies had less co-occurrence than PG-13 or R-rated movies, but there was no statistical difference between PG-13 and R-rated movies with regard to violence co-occurring with other risk behaviors. These trends did not vary over time. Popular films that contain violent characters also show those characters engaging in other risk behaviors. Similar rates of co-occurrence between PG-13 and R-rated films suggest that the Motion Picture Association of America ratings system is not sensitive to the joint portrayal of violence and alcohol, sex, and tobacco-related risk behaviors. The on-screen clustering of violence with other risk behaviors is cause for concern and worthy of additional research.

  11. Effects of flicker rate, complexity, and color combinations of Chinese characters and backgrounds on visual search performance with varying flicker types.

    PubMed

    Huang, Kuo-Chen; Lin, Rung-Tai; Wu, Chih-Fu

    2011-08-01

    This study investigated the effects of the number of strokes in Chinese characters, flicker rate, flicker type, and character/background color combination on search performance. Thirty-seven participants aged 14 to 18 years were randomly assigned to each flicker-type condition. The search field contained 36 characters arranged in a 6 x 6 matrix. Participants were asked to search for the target characters among the surrounding distractors and count how many target characters were displayed in the search array. Analysis indicated that the character/background color combination significantly affected search times. The color combinations of white/purple and white/green yielded search times greater than those for black/white and black/yellow combinations. A significant effect of flicker type on search time was also identified. Rotating characters facilitated search time, compared with twinkling ones. The number of strokes and the flicker rate also had positive effects on search performance. For flicker rate, the search accuracy for 0.5 Hz was greater than that for 1.0 Hz, and the latter was also greater than that for 2.0 Hz. Results are applicable to web advertisement designs containing dynamic characters, in terms of how best to capture readers' attention by various means of dynamic character presentation.

  12. Construction of language models for a handwritten mail reading system

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2012-01-01

    This paper presents a system for the recognition of unconstrained handwritten mail. The main part of this system is an HMM recognizer which uses trigraphs to model contextual information. This recognition system does not require any segmentation into words or characters and works directly at the line level. To take linguistic information into account and enhance performance, a language model is introduced. This language model is based on bigrams and built from training document transcriptions only. Different experiments with various vocabulary sizes and language models have been conducted. Word Error Rate and perplexity values are compared to show the value of specific language models fitted to the handwritten mail recognition task.
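
    A toy version of such a bigram language model, built from training transcriptions only, can be sketched as follows (add-one smoothing here is for illustration; the paper's smoothing and vocabulary handling may differ).

        from collections import Counter
        import math

        def train_bigram(sentences):
            unigrams, bigrams = Counter(), Counter()
            for s in sentences:
                words = ["<s>"] + s.split() + ["</s>"]
                unigrams.update(words[:-1])           # context counts
                bigrams.update(zip(words, words[1:]))
            vocab = {w for s in sentences for w in s.split()} | {"</s>"}
            return unigrams, bigrams, len(vocab)

        def perplexity(sentence, unigrams, bigrams, v):
            words = ["<s>"] + sentence.split() + ["</s>"]
            logp = sum(math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + v))
                       for w1, w2 in zip(words, words[1:]))
            return math.exp(-logp / (len(words) - 1))

        uni, bi, v = train_bigram(["please send the letter", "send the parcel"])
        print(perplexity("please send the parcel", uni, bi, v))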

  13. The ideal elf: identity exploration in World of Warcraft.

    PubMed

    Bessière, Katherine; Seay, A Fleming; Kiesler, Sara

    2007-08-01

    In this study, we examine the identity exploration possibilities presented by online multiplayer games in which players use graphics tools and character-creation software to construct an avatar, or character. We predicted that World of Warcraft players would create their main character to be more similar to their ideal self than the players themselves were. Our results support this idea; a sample of players rated their character as having attributes more favorable than their own self-rated attributes. This trend was stronger among those with lower psychological well-being, who rated themselves comparatively lower than they rated their character. Our results suggest that the game world allows players the freedom to create successful virtual selves regardless of the constraints of their actual situation.

  14. The Investigation of the Fundamental Limits of Heterodyne Holographic Interferometry with the Application of Imaging Laser Generated Lamb Waves

    DTIC Science & Technology

    1989-04-01


  15. National Character Does Not Reflect Mean Personality Trait Levels in 49 Cultures

    PubMed Central

    Abdel-Khalek, A. M.; Ádám, N.; Adamovová, L.; Ahn, C.-k.; Ahn, H.-n.; Alansari, B. M.; Alcalay, L.; Allik, J.; Angleitner, A.; Avia, A.; Ayearst, L. E.; Barbaranelli, C.; Beer, A.; Borg-Cunen, M. A.; Bratko, D.; Brunner-Sciarra, M.; Budzinski, L.; Camart, N.; Dahourou, D.; De Fruyt, F.; de Lima, M. P.; del Pilar, G. E. H.; Diener, E.; Falzon, R.; Fernando, K.; Ficková, E.; Fischer, R.; Flores-Mendoza, C.; Ghayur, M. A.; Gülgöz, S.; Hagberg, B.; Halberstadt, J.; Halim, M. S.; Hřebíčková, M.; Humrichouse, J.; Jensen, H. H.; Jocic, D. D.; Jónsson, F. H.; Khoury, B.; Klinkosz, W.; Knežević, G.; Lauri, M. A.; Leibovich, N.; Martin, T. A.; Marušić, I.; Mastor, K. A.; Matsumoto, D.; McRorie, M.; Meshcheriakov, B.; Mortensen, E. L.; Munyae, M.; Nagy, J.; Nakazato, K.; Nansubuga, F.; Oishi, S.; Ojedokun, A. O.; Ostendorf, F.; Paulhus, D. L.; Pelevin, S.; Petot, J.-M.; Podobnik, N.; Porrata, J. L.; Pramila, V. S.; Prentice, G.; Realo, A.; Reátegui, N.; Rolland, J.-P.; Rossier, J.; Ruch, W.; Rus, V. S.; Sánchez-Bernardos, M. L.; Schmidt, V.; Sciculna-Calleja, S.; Sekowski, A.; Shakespeare-Finch, J.; Shimonaka, Y.; Simonetti, F.; Sineshaw, T.; Siuta, J.; Smith, P. B.; Trapnell, P. D.; Trobst, K. K.; Wang, L.; Yik, M.; Zupančič, A.

    2009-01-01

    Most people hold beliefs about personality characteristics typical of members of their own and others' cultures. These perceptions of national character may be generalizations from personal experience, stereotypes with a “kernel of truth,” or inaccurate stereotypes. We obtained national character ratings (N = 3,989) from 49 cultures and compared them to the average personality scores of culture members assessed by observer ratings and self-reports. National character ratings were reliable, but did not converge with assessed traits (Mdn r = .04). Perceptions of national character thus appear to be unfounded stereotypes that may serve the function of maintaining a national identity. PMID:16210536

  16. A method for inferring the rate of evolution of homologous characters that can potentially improve phylogenetic inference, resolve deep divergence and correct systematic biases.

    PubMed

    Cummins, Carla A; McInerney, James O

    2011-12-01

    Current phylogenetic methods attempt to account for evolutionary rate variation across characters in a matrix. This is generally achieved by the use of sophisticated evolutionary models, combined with dense sampling of large numbers of characters. However, systematic biases and superimposed substitutions make this task very difficult. Model adequacy can sometimes be achieved at the cost of adding large numbers of free parameters, with each parameter being optimized according to some criterion, resulting in increased computation times and large variances in the model estimates. In this study, we develop a simple approach that estimates the relative evolutionary rate of each homologous character. The method that we describe uses the similarity between characters as a proxy for evolutionary rate. In this article, we work on the premise that if the character-state distribution of a homologous character is similar to many other characters, then this character is likely to be relatively slowly evolving. If the character-state distribution of a homologous character is not similar to many or any of the rest of the characters in a data set, then it is likely to be the result of rapid evolution. We show that in some test cases, at least, the premise can hold and the inferences are robust. Importantly, the method does not use a "starting tree" to make the inference and therefore is tree independent. We demonstrate that this approach can work as well as a maximum likelihood (ML) approach, though the ML method needs to have a known phylogeny, or at least a very good estimate of that phylogeny. We then demonstrate some uses for this method of analysis, including the improvement in phylogeny reconstruction for both deep-level and recent relationships and overcoming systematic biases such as base composition bias. Furthermore, we compare this approach to two well-established methods for reweighting or removing characters. These other methods are tree-based and we show that they can be systematically biased. We feel this method can be useful for phylogeny reconstruction, understanding evolutionary rate variation, and for understanding selection variation on different characters.
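
    A crude stand-in for that premise (hedged: this is not the authors' measure) is to score each character column by how often it partitions pairs of taxa the same way the other columns do; a column with low average agreement is a candidate fast-evolving character.

        from itertools import combinations

        def pair_pattern(column):
            """For every pair of taxa: 1 if they share a state, else 0."""
            return [int(a == b) for a, b in combinations(column, 2)]

        def similarity_scores(matrix):
            patterns = [pair_pattern(col) for col in matrix]
            scores = []
            for i, p in enumerate(patterns):
                agree = [sum(x == y for x, y in zip(p, q)) / len(p)
                         for j, q in enumerate(patterns) if j != i]
                scores.append(sum(agree) / len(agree))
            return scores

        # Rows are characters; entries are states for taxa t1..t4.
        chars = [[0, 0, 1, 1],     # congruent with the next character
                 [0, 0, 1, 1],
                 [0, 1, 0, 1]]     # conflicts: candidate fast character
        print(similarity_scores(chars))   # the last score is the lowest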

  17. Combining multiple thresholding binarization values to improve OCR output

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Kennard, Douglas J.; Ringger, Eric K.

    2013-01-01

    For noisy, historical documents, a high optical character recognition (OCR) word error rate (WER) can render the OCR text unusable. Since image binarization is often the method used to identify foreground pixels, a body of research seeks to improve image-wide binarization directly. Instead of relying on any one imperfect binarization technique, our method incorporates information from multiple simple thresholding binarizations of the same image to improve text output. Using a new corpus of 19th century newspaper grayscale images for which the text transcription is known, we observe WERs of 13.8% and higher using current binarization techniques and a state-of-the-art OCR engine. Our novel approach combines the OCR outputs from multiple thresholded images by aligning the text output and producing a lattice of word alternatives from which a lattice word error rate (LWER) is calculated. Our results show an LWER of 7.6% when aligning two threshold images and an LWER of 6.8% when aligning five. From the word lattice we commit to one hypothesis by applying the methods of Lund et al. (2011), achieving an improvement over the original OCR output and an 8.41% WER result on this data set.
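
    For reference, the word error rate used throughout is just the word-level Levenshtein distance divided by the reference length, as in this sketch.

        def wer(reference, hypothesis):
            ref, hyp = reference.split(), hypothesis.split()
            # Dynamic-programming edit distance over word tokens.
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                    d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
            return d[-1][-1] / len(ref)

        # One substitution and one insertion against a 4-word reference.
        print(wer("the quick brown fox", "the quack brown fox jumps"))   # 0.5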

  18. Secular variation of activity in comets 2P/Encke and 9P/Tempel 1

    NASA Technical Reports Server (NTRS)

    Haken, Michael; AHearn, Michael F.; Feldman, Paul D.; Budzien, Scott A.

    1995-01-01

    We compare production rates of H2O derived from International Ultraviolet Explorer (IUE) spectra from multiple apparitions of 2 comets, 2P/Encke and 9P/Tempel 1, whose orbits are in near-resonance with that of the Earth. Since model-induced errors are primarily a function of observing geometry, the close geometrical matches afforded by the resonance condition result in the cancellation of such errors when taking ratios of production rates. Giving careful attention to the variation of model parameters with solar activity, we find marginal evidence of change in 2P/Encke: a 1-sigma pre-perihelion decrease averaging 4%/revolution over 4 apparitions from 1980-1994, and a 1-sigma post-perihelion increase of 16%/revolution for 2 successive apparitions in 1984 and 1987. We find for 9P/Tempel 1, however, a 7-sigma decrease of 29%/revolution over 3 apparitions from 1983-1994, even after correcting for a tracking problem which made the fluxes systematically low. We speculate on a possible association of the character of long-term brightness variations with physical properties of the nucleus, and discuss implications for future research.

  19. How well does multiple OCR error correction generalize?

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: (1) demonstrating the applicability of novel methods for correcting optical character recognition (OCR) output on disparate data sets, including a new synthetic training set; (2) enhancing the correction algorithm with novel features; and (3) assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.

  20. The effects of social anxiety on interpersonal evaluations of warmth and dominance.

    PubMed

    Rodebaugh, Thomas L; Bielak, Tatiana; Vidovic, Vanja; Moscovitch, David A

    2016-03-01

    Social anxiety disorder is associated with interpersonal dysfunction, but it is not clear why people with the disorder feel unsatisfied with their relationships. One possibility is that higher social anxiety could lead to changes in sensitivity to interpersonal traits. We examined whether social anxiety moderates the types of interpersonal evaluations people make regarding warmth and dominance. We developed vignettes in which central characters systematically varied in dominance and warmth and asked two samples of participants (undergraduate students, n=176, and online workers, n=403) to rate their willingness to interact with, and the social desirability of, these characters. Participants in general reported stronger desire to interact with warmer and less dominant characters, and rated warmer and more dominant characters as being more socially desirable. People with higher social anxiety exhibited greater tolerance for colder and more submissive characters on both rated dimensions. The perceived similarity of the characters accounted for the bulk of these effects. Participants indicated a higher desire to interact with characters more similar to themselves, and people with higher social anxiety were more likely to rate submissive and cold characters as being like themselves. The results have implications for clinical interventions for social anxiety disorder. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Prevalence of smoking among major movie characters: 1996–2004

    PubMed Central

    Worth, Keilah A; Cin, Sonya Dal; Sargent, James D

    2006-01-01

    Background: Reports of a relationship between watching smoking in movies and smoking among adolescents have prompted greater scrutiny of smoking in movies by the public health community. Objective: To assess the smoking prevalence among adult and adolescent movie characters, examine trends in smoking in movies over time, and compare the data with actual smoking prevalence among US adults and adolescents. Design and methods: Smoking status of all major human adolescent and adult movie characters in the top 100 box office hits from 1996 to 2004 (900 movies) was assessed, and smoking prevalence was examined by Motion Picture Association of America (MPAA) rating and year of release. Results: The movies contained 5944 major characters, of whom 4911 were adults and 466 were adolescents. Among adult movie characters, the overall smoking prevalence was 20.6%; smoking was more common in men than in women (22.6% v 16.1%, respectively, p<0.001), and was related to MPAA rating category (26.9% for movies rated R (restricted, people aged <17 years require accompanying adult), 17.9% for PG‐13 (parents strongly cautioned that some material might be inappropriate for children) and 10.4% for G/PG (general audiences, all ages; parental guidance suggested for children), p<0.001). In 1996, the smoking prevalence for major adult movie characters (25.7%) was similar to that in the actual US population (24.7%). Smoking prevalence among adult movie characters declined to 18.4% in 2004 (p for trend <0.001), slightly below that for the US population for that year (20.9%). Examination of trends by MPAA rating showed that the downward trend in smoking among adult movie characters was statistically significant in movies rated G/PG and R, but not in those rated PG‐13. A downward trend over time was also found for smoking among adolescent movie characters. There was no smoking among adult characters in 43.3% of the movies; however, in 39% of the movies, smoking prevalence among adult characters was higher than that in the US adult population in the year of release. Conclusions: Smoking prevalence among major adolescent and adult movie characters is declining, with the downward trend among adult characters weakest for PG‐13‐rated movies. Although many movies depict no adult smoking, more than one third depict smoking as more prevalent than that among US adults at the time of release. PMID:17130372

  2. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    PubMed Central

    Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge

    2013-01-01

    This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
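
    The abstract's encoding pipeline (an error-correcting codeword per character, with key-randomized assignment) can be illustrated with a toy stand-in. HyDEn itself uses custom quaternary codes; the sketch below instead encodes each extended-ASCII byte as two binary Hamming(7,4) codewords and maps bit pairs onto nucleotides, with a key-seeded permutation standing in for the randomized codeword assignment. All names and parameters are illustrative, not the paper's.

      import random

      # Hamming(7,4): 4 data bits -> 7 bits (3 parity bits), able to
      # correct any single-bit error (decoding is omitted in this sketch).
      def hamming74_encode(nibble):
          d = [(nibble >> i) & 1 for i in range(4)]
          p1 = d[0] ^ d[1] ^ d[3]
          p2 = d[0] ^ d[2] ^ d[3]
          p3 = d[1] ^ d[2] ^ d[3]
          return [p1, p2, d[0], p3, d[1], d[2], d[3]]

      BITPAIR_TO_BASE = {(0, 0): 'A', (0, 1): 'C', (1, 0): 'G', (1, 1): 'T'}

      def encode_char(ch, key):
          # A key-seeded shuffle of the 256 code points stands in for the
          # private-key randomized assignment (illustrative only).
          perm = list(range(256))
          random.Random(key).shuffle(perm)
          byte = perm[ord(ch)]                    # assumes extended ASCII
          bits = hamming74_encode(byte >> 4) + hamming74_encode(byte & 0xF)
          return ''.join(BITPAIR_TO_BASE[p] for p in zip(bits[0::2], bits[1::2]))

      print(''.join(encode_char(c, key=42) for c in "OCR"))  # 7 bases per char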

  3. Neural Network and Letter Recognition.

    NASA Astrophysics Data System (ADS)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 handwritten alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components: a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlations between the detected local features are then considered: by correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and partial rotational tolerance. The C-layer then compresses the data through the Gabor transform; pattern-dependent choice of the centers and wavelengths of the Gabor filters gives the system shift and scale tolerance. Three different learning schemes were investigated in the recognition unit: error back-propagation with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and a perceptron as the recognition unit of the ambiguity resolver. Handwriting sets from one hundred different people were collected; some were used as training sets and the remainder as test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the three learning algorithms. The minimum error rate, 4.9%, is achieved for alphanumeric sets when 50 sets are trained; with the ambiguity resolver, it is reduced to 2.5%. When only numeral sets are trained and tested, a 2.0% error rate is achieved, and when only alphabet sets are considered, the error rate is reduced to 1.1%.
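
    Of the three learning schemes, the simple perceptron is compact enough to sketch. Below is a minimal one-layer perceptron over flattened 32 x 32 binary patterns with 36 output classes; the preprocessing layers (U, V, C) are omitted, and the shapes and epoch count are illustrative rather than the paper's.

      import numpy as np

      def train_perceptron(X, y, n_classes=36, epochs=20):
          # X: (n_samples, 1024) flattened 32 x 32 binary patterns;
          # y: integer labels 0..35. One weight vector per class.
          W = np.zeros((n_classes, X.shape[1]))
          for _ in range(epochs):
              for x, label in zip(X, y):
                  pred = np.argmax(W @ x)
                  if pred != label:          # classic perceptron update
                      W[label] += x
                      W[pred] -= x
          return W

      def classify(W, x):
          return int(np.argmax(W @ x))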

  4. Reversal of photon-scattering errors in atomic qubits.

    PubMed

    Akerman, N; Kotler, S; Glickman, Y; Ozeri, R

    2012-09-07

    Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.

  5. Character combinations, convergence and diversification in ectoparasitic arthropods.

    PubMed

    Poulin, Robert

    2009-08-01

    Different lineages of organisms diversify over time at different rates, in part as a consequence of the characteristics of the species in these lineages. Certain suites of traits possessed by species within a clade may determine rates of diversification, with some particular combinations of characters acting synergistically to either limit or promote diversification; the most successful combinations may also emerge repeatedly in different clades via convergent evolution. Here, the association between species characters and diversification is investigated amongst 21 independent lineages of arthropods ectoparasitic on vertebrate hosts. Using nine characters (each with two to four states) that capture general life history strategy, transmission mode and host-parasite interaction, each lineage was described by the set of character states it possesses. The results show, firstly, that most possible pair-wise combinations of character states have been adopted at least once, sometimes several times independently by different lineages; thus, ectoparasitic arthropods have explored most of the life history character space available to them. Secondly, lineages possessing commonly observed combinations of character states are not necessarily the ones that have experienced the highest rates of diversification (measured as a clade's species-per-genus ratio). Thirdly, some specific traits are associated with higher rates of diversification. Using more than one host per generation, laying eggs away from the host and intermediate levels of fecundity are features that appear to have promoted diversification. These findings indicate that particular species characters may be evolutionary drivers of diversity, whose effects could also apply in other taxa.

  6. [Localization of scotomas in AMD by reading test : Random series of words in standardized format].

    PubMed

    Eisenbarth, W; Pado, U; Schriever, S; Schötschel, D; Feucht, N; MacKeben, M

    2016-09-01

    Reading performance as measured by reading tests depends on whether the reading material used has contextual continuity. The goal of this study was to create a German version of the SKread test and to evaluate it in a clinical setting. The evaluation of the SKread test was first performed on two groups of visually healthy subjects of different ages: a junior group of 25 persons with ages between 20 and 30 years (mean = 25.84 years, SD ± 2.41 years) and a senior group of 25 persons with ages between 51 and 84 years (mean = 62.40 ± 8.46 years). The same measurements were also performed on a group of 18 patients with age-related macular degeneration (AMD) with ages between 75 and 95 years (mean = 81.89 ± 5.48 years). Reading performance was also measured using Radner charts. Using reading material without syntactic continuity considerably slowed down the reading speed and increased the error rate. Median reading rates of 11.53 characters/s (CPS) for the junior group and 8.96 CPS for the senior group were clearly lower than those for the Radner charts (22.02 CPS and 18.48 CPS, respectively). In the AMD patients, a statistical analysis of the error rates showed a highly significant difference between the Radner charts and the SKread test (p = 0.00014). Furthermore, by analyzing the errors made in the SKread test, information could be obtained about the position of central scotomas. The test-retest reliability of the SKread test was very good. Information about the position of a central scotoma can be acquired by using the SKread test and an analysis of reading errors, which can augment effective clinical monitoring in AMD and subsequent visual rehabilitation.

  7. Effect of pattern complexity on the visual span for Chinese and alphabet characters

    PubMed Central

    Wang, Hui; He, Xuanzi; Legge, Gordon E.

    2014-01-01

    The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity—lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020
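
    Perimetric complexity is commonly defined as the squared perimeter of the ink divided by the ink area, often normalized by 4*pi so that a filled disk scores 1. Here is a rough sketch of that definition for a binary character image, approximating the perimeter by counting boundary pixels; the approximation and function name are ours, not the paper's.

      import numpy as np
      from scipy.ndimage import binary_erosion

      def perimetric_complexity(img):
          # img: 2-D boolean array, True where the character has ink.
          ink = img.astype(bool)
          area = ink.sum()
          # Boundary layer = ink pixels removed by a one-pixel erosion;
          # its pixel count is a crude estimate of the perimeter.
          boundary = ink & ~binary_erosion(ink)
          return boundary.sum() ** 2 / (4 * np.pi * area)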

  8. A tripartite taxonomy of character: Evidence for intrapersonal, interpersonal, and intellectual competencies in children

    PubMed Central

    Park, Daeun; Tsukayama, Eli; Goodwin, Geoffrey P.; Patrick, Sarah; Duckworth, Angela L.

    2017-01-01

    Other than cognitive ability, what competencies should schools promote in children? How are they organized, and to what extent do they predict consequential outcomes? Separate theoretical traditions have suggested interpersonal, intrapersonal, and intellectual dimensions, reflecting how children relate to other people, manage their own goals and impulses, and engage with ideas, respectively. However, very little work has examined character empirically. In the current investigation, we partnered with middle schools that had previously identified character strengths relevant in their communities. Across three longitudinal, prospective studies, we examined the factor structure of character, associations with intelligence and Big Five personality traits, and predictive validity for consequential outcomes like peer relations, class participation, and report card grades. In Study 1, teachers rated their students on behaviors exemplifying character strengths as they played out in students’ daily lives. Exploratory factor analyses yielded a three-factor structure consisting of interpersonal (interpersonal self-control, gratitude, social intelligence), intellectual (zest, curiosity), and intrapersonal (academic self-control, grit) factors of character. In Study 2, children rated their own behavior and completed a test of cognitive ability. Confirmatory factor analyses supported the same three-factor structure, and these factors were only weakly associated with cognitive ability. In Study 3, teachers provided character ratings; in parallel, students completed measures of character as well as Big Five personality factors. As expected, intellectual, interpersonal, and intrapersonal character factors related to Big Five openness to experience, agreeableness, and conscientiousness, respectively. Across studies, positive peer relations were most consistently predicted by interpersonal character, class participation by intellectual character, and report card grades by intrapersonal character. Collectively, our findings support a tripartite taxonomy of character in the school context. PMID:29051684

  9. A tripartite taxonomy of character: Evidence for intrapersonal, interpersonal, and intellectual competencies in children.

    PubMed

    Park, Daeun; Tsukayama, Eli; Goodwin, Geoffrey P; Patrick, Sarah; Duckworth, Angela L

    2017-01-01

    Other than cognitive ability, what competencies should schools promote in children? How are they organized, and to what extent do they predict consequential outcomes? Separate theoretical traditions have suggested interpersonal, intrapersonal, and intellectual dimensions, reflecting how children relate to other people, manage their own goals and impulses, and engage with ideas, respectively. However, very little work has examined character empirically. In the current investigation, we partnered with middle schools that had previously identified character strengths relevant in their communities. Across three longitudinal, prospective studies, we examined the factor structure of character, associations with intelligence and Big Five personality traits, and predictive validity for consequential outcomes like peer relations, class participation, and report card grades. In Study 1, teachers rated their students on behaviors exemplifying character strengths as they played out in students' daily lives. Exploratory factor analyses yielded a three-factor structure consisting of interpersonal (interpersonal self-control, gratitude, social intelligence), intellectual (zest, curiosity), and intrapersonal (academic self-control, grit) factors of character. In Study 2, children rated their own behavior and completed a test of cognitive ability. Confirmatory factor analyses supported the same three-factor structure, and these factors were only weakly associated with cognitive ability. In Study 3, teachers provided character ratings; in parallel, students completed measures of character as well as Big Five personality factors. As expected, intellectual, interpersonal, and intrapersonal character factors related to Big Five openness to experience, agreeableness, and conscientiousness, respectively. Across studies, positive peer relations were most consistently predicted by interpersonal character, class participation by intellectual character, and report card grades by intrapersonal character. Collectively, our findings support a tripartite taxonomy of character in the school context.

  10. Ancestral state reconstruction, rate heterogeneity, and the evolution of reptile viviparity.

    PubMed

    King, Benedict; Lee, Michael S Y

    2015-05-01

    Virtually all models for reconstructing ancestral states for discrete characters make the crucial assumption that the trait of interest evolves at a uniform rate across the entire tree. However, this assumption is unlikely to hold in many situations, particularly as ancestral state reconstructions are being performed on increasingly large phylogenies. Here, we show how failure to account for such variable evolutionary rates can cause highly anomalous (and likely incorrect) results, while three methods that accommodate rate variability yield the opposite, more plausible, and more robust reconstructions. The random local clock method, implemented in BEAST, estimates the position and magnitude of rate changes on the tree; split BiSSE estimates separate rate parameters for pre-specified clades; and the hidden rates model partitions each character state into a number of rate categories. Simulations show the inadequacy of traditional models when characters evolve with both asymmetry (different rates of change between states within a character) and heterotachy (different rates of character evolution across different clades). The importance of accounting for rate heterogeneity in ancestral state reconstruction is highlighted empirically with a new analysis of the evolution of viviparity in squamate reptiles, which reveals a predominance of forward (oviparous-viviparous) transitions and very few reversals. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Rating of personality disorder features in popular movie characters.

    PubMed

    Hesse, Morten; Schliewe, Sanna; Thomsen, Rasmus R

    2005-12-08

    Tools for training professionals in rating personality disorders are few. We present one such tool: rating of fictional persons. However, before ratings of fictional persons can be useful, we need to know whether raters get the same results, when rating fictional characters. Psychology students at the University of Copenhagen (N = 8) rated four different movie characters from four movies based on three systems: Global rating scales representing each of the 10 personality disorders in the DSM-IV, a criterion list of all criteria for all DSM-IV personality disorders in random order, and the Ten Item Personality Inventory for rating the five-factor model. Agreement was estimated based on intraclass-correlation. Agreement for rating scales for personality disorders ranged from 0.04 to 0.54. For personality disorder features based on DSM-IV criteria, agreement ranged from 0.24 to 0.89, and agreement for the five-factor model ranged from 0.05 to 0.88. The largest multivariate effect was observed for criteria count followed by the TIPI, followed by rating scales. Raters experienced personality disorder criteria as the easiest, and global personality disorder scales as the most difficult, but with significant variation between movies. Psychology students with limited or no clinical experience can agree well on the personality traits of movie characters based on watching the movie. Rating movie characters may be a way to practice assessment of personality.

  12. Rating of personality disorder features in popular movie characters

    PubMed Central

    Hesse, Morten; Schliewe, Sanna; Thomsen, Rasmus R

    2005-01-01

    Background Tools for training professionals in rating personality disorders are few. We present one such tool: rating of fictional persons. However, before ratings of fictional persons can be useful, we need to know whether raters get the same results, when rating fictional characters. Method Psychology students at the University of Copenhagen (N = 8) rated four different movie characters from four movies based on three systems: Global rating scales representing each of the 10 personality disorders in the DSM-IV, a criterion list of all criteria for all DSM-IV personality disorders in random order, and the Ten Item Personality Inventory for rating the five-factor model. Agreement was estimated based on intraclass-correlation. Results Agreement for rating scales for personality disorders ranged from 0.04 to 0.54. For personality disorder features based on DSM-IV criteria, agreement ranged from 0.24 to 0.89, and agreement for the five-factor model ranged from 0.05 to 0.88. The largest multivariate effect was observed for criteria count followed by the TIPI, followed by rating scales. Raters experienced personality disorder criteria as the easiest, and global personality disorder scales as the most difficult, but with significant variation between movies. Conclusion Psychology students with limited or no clinical experience can agree well on the personality traits of movie characters based on watching the movie. Rating movie characters may be a way to practice assessment of personality. PMID:16336663

  13. The effects of intermittent illumination on a visual inspection task.

    PubMed

    Kennedy, A; Brysbaert, M; Murray, W S

    1998-02-01

    Two experiments are described in which eye movements were monitored as subjects performed a simple target-spotting task under conditions of intermittent illumination produced by varying the display-screen frame rate on a computer VDU. In Experiment 1, subjects executed a saccade from a fixation point to a target which appeared randomly at a fixed eccentricity of 14 character positions to the left or right. Saccade latency did not differ reliably as a function of screen refresh rate, but average saccade extent at 70 Hz and 110 Hz was reliably shorter than at 90 Hz and 100 Hz. Experiment 2 examined the same task using a range of target eccentricities (7, 14, and 28 character positions to the left and right) and across a wider range of screen refresh rates. The results confirmed the curvilinear relationship obtained in Experiment 1, with average saccade extent reliably shorter at refresh rates of 50 Hz and 125 Hz than at 75 Hz and 100 Hz. While the effect was greater for remote targets, analyses of the proportional target error failed to show a reliable interaction between target eccentricity and display refresh rate. In contrast to Experiment 1, there was a pronounced effect of refresh rate on saccade latency (corrected for time to write the screen frame), with shorter latencies at higher refresh rates. It may be concluded that pulsation at frequencies above fusion disrupts saccade control. However, the curvilinear functional relationship between screen refresh rate and saccade extent obtained in these studies differs from previously reported effects of intermittent illumination on the average size of "entry saccades" (the first saccade to enter a given word) in a task involving word identification (Kennedy & Murray, 1993a, 1996). This conflict of data may arise in part because within-word adjustments in viewing position, which are typical of normal reading, influence measures of average saccade extent.

  14. Teaching identity matching of braille characters to beginning braille readers.

    PubMed

    Toussaint, Karen A; Scheithauer, Mindy C; Tiger, Jeffrey H; Saunders, Kathryn J

    2017-04-01

    We taught three children with visual impairments to make tactile discriminations of the braille alphabet within a matching-to-sample format. That is, we presented participants with a braille character as a sample stimulus, and they selected the matching stimulus from a three-comparison array. In order to minimize participant errors, we initially arranged braille characters into training sets in which there was a maximum difference in the number of dots comprising the target and nontarget comparison stimuli. As participants mastered these discriminations, we increased the similarity between target and nontarget comparisons (i.e., an approximation of stimulus fading). All three participants' accuracy systematically increased following the introduction of this identity-matching procedure. © 2017 Society for the Experimental Analysis of Behavior.

  15. An Approach for Implementing a Microcomputer Based Report Origination System in the Ada Programming Language

    DTIC Science & Technology

    1983-03-01

    Decision Tree; PACKAGE unitrep Action/Area Selection Flow Chart; PACKAGE unitrep Control Flow Chart ... the originator would manually draft simple, readable, formatted messages using predefined forms and decision logic trees. This alternative was ... Study Analysis, data content errors (percent of errors): Character Type, 2.1; Calculations/Associations, 14.3; Message Identification, 4.?; Value Mismatch, 22.E

  16. Slant rectification in Russian passport OCR system using fast Hough transform

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Bezmaternykh, Pavel; Nikolaev, Dmitry; Arlazarov, Vladimir

    2017-03-01

    In this paper, we introduce a slant detection method based on the Fast Hough Transform and demonstrate its application in an industrial system for recognizing Russian passports. About 1.5% of these documents have slanted or italic text, which reduces the recognition rate because optical character recognition systems are normally designed to process upright fonts. Our method uses the Fast Hough Transform to analyse vertical strokes of characters extracted with the help of the x-derivative of a text line image. To improve the quality of the detector, we also introduce field grouping rules. The resulting algorithm achieves high detection quality; almost all errors of the considered approach occur on passports with nonstandard fonts, while the slant detector behaves appropriately on standard ones.
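
    The paper's detector analyses vertical strokes via the Fast Hough Transform of the x-derivative image. As a simplified stand-in (a shear-and-project search rather than a true Fast Hough Transform), the sketch below counter-shears the gradient image at candidate angles and picks the angle whose column projection is sharpest; everything here is illustrative.

      import numpy as np

      def estimate_slant(img, max_deg=20):
          # img: 2-D grayscale array of a text line. Vertical strokes give
          # strong x-derivative responses; at the correct counter-shear the
          # strokes align and the column projection profile sharpens.
          gx = np.abs(np.diff(img.astype(float), axis=1))
          h, w = gx.shape
          rows = np.arange(h)[:, None]
          best_deg, best_score = 0.0, -1.0
          for deg in np.arange(-max_deg, max_deg + 0.5, 0.5):
              shift = np.tan(np.radians(deg)) * (rows - h / 2)
              cols = (np.arange(w)[None, :] + shift).round().astype(int) % w
              score = gx[rows, cols].sum(axis=0).var()
              if score > best_score:
                  best_deg, best_score = deg, score
          return best_deg        # estimated slant angle in degrees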

  17. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of handwritten documents into structured text form. Relatively little work exists on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. Each handwritten Kannada character is resized to 20 x 30 pixels, and the resized character is used for training the neural network. Once training is complete, the same character is given as input to the network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.
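
    As a sketch of the setup described, here is a feed-forward network over 20 x 30 = 600 inputs using scikit-learn's MLPClassifier as a stand-in for the paper's network, comparing accuracy across hidden-layer sizes. The data loading, hidden sizes, and iteration budget are all assumptions for illustration.

      from sklearn.neural_network import MLPClassifier

      def compare_hidden_sizes(X, y, sizes=(50, 100, 200)):
          # X: (n_samples, 600) flattened 20 x 30 character images in [0, 1];
          # y: integer labels for the Kannada character classes.
          results = {}
          for n_hidden in sizes:
              clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=500)
              clf.fit(X, y)
              results[n_hidden] = clf.score(X, y)   # training accuracy only
          return results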

  18. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classifications from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
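
    Character and word accuracy figures like those above are typically computed from the edit distance between the OCR output and a reference text; a minimal sketch, assuming the usual definition accuracy = 1 - (edit distance / reference length):

      def edit_distance(a, b):
          # Classic Levenshtein dynamic program over two sequences.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              curr = [i]
              for j, cb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,               # deletion
                                  curr[j - 1] + 1,           # insertion
                                  prev[j - 1] + (ca != cb))) # substitution
              prev = curr
          return prev[-1]

      def char_accuracy(ocr, ref):
          return 1.0 - edit_distance(ocr, ref) / len(ref)

      def word_accuracy(ocr, ref):
          return 1.0 - edit_distance(ocr.split(), ref.split()) / len(ref.split())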

  19. Data Link Test and Analysis System/ATCRBS Transponder Test System

    DTIC Science & Technology

    1990-05-01

    cc pcinit() pcinit.cc plotdat() plotdat.cc plotmsg() sumscrn.cc plotque() sumscrn.cc pmsg() pmsg.cc pr_head() prhead.cc virt_init() print.cc pw... The code data from the Decode Status word from hardware. 2. data->wind_nm - The reply window that this data came from. mparms) NAME mparms - Modify... character is entered. pmsg() NAME pmsg - Print error message. Source file - pmsg.cc. FUNCTION CALL pmsg(ecode, str) int ecode; /* Error code */ char

  20. Valence, arousal, familiarity, concreteness, and imageability ratings for 292 two-character Chinese nouns in Cantonese speakers in Hong Kong.

    PubMed

    Yee, Lydia T S

    2017-01-01

    Words are frequently used as stimuli in cognitive psychology experiments, for example, in recognition memory studies. In these experiments, it is often desirable to control for the words' psycholinguistic properties because differences in such properties across experimental conditions might introduce undesirable confounds. In order to avoid confounds, studies typically check to see if various affective and lexico-semantic properties are matched across experimental conditions, and so databases that contain values for these properties are needed. While word ratings for these variables exist in English and other European languages, ratings for Chinese words are not comprehensive. In particular, while ratings for single characters exist, ratings for two-character words, which often have different meanings than their constituent characters, are scarce. In this study, ratings for 292 two-character Chinese nouns were obtained from Cantonese speakers in Hong Kong. Affective variables, including valence and arousal, and lexico-semantic variables, including familiarity, concreteness, and imageability, were rated in the study. The words were selected from a film subtitle database containing word frequency information that could be extracted and listed alongside the resulting ratings. Overall, the subjective ratings showed good reliability across all rated dimensions, as well as good reliability within and between the different groups of participants who each rated a subset of the words. Moreover, several well-established relationships between the variables found consistently in other languages were also observed in this study, demonstrating that the ratings are valid. The resulting word database can be used in studies where control for the above psycholinguistic variables is critical to the research design.

  1. Data entry and error embedding system

    NASA Technical Reports Server (NTRS)

    Woo, Daniel N. (Inventor); Woo, Jr., John (Inventor)

    1998-01-01

    A data entry and error embedding system in which, first, a document is bitmapped and recorded in a first memory. The document is then displayed, and the portions of it to be replicated by data entry are underlaid by a window; replicated data is entered into this window at a location and size such that it is juxtaposed just below the material being replicated, enhancing the accuracy of replication. Second, with this format in place, selected portions of the replicated data are altered by the insertion of character or word substitutions, thus embedding errors. Finally, a proofreader endeavors to correct the error-embedded data, and a record of his or her changes is kept. In this manner, the skill level of the proofreader and the accuracy of the data are computed.
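
    The error-embedding step and the proofreader scoring lend themselves to a toy sketch: seed known character substitutions at recorded positions, then measure what fraction of them the proofreader restores. The positions, alphabet, and scoring below are illustrative, not the patent's method.

      import random

      def embed_errors(text, n_errors, seed=0):
          rng = random.Random(seed)
          chars = list(text)
          positions = rng.sample(range(len(chars)), n_errors)
          for pos in positions:
              original = chars[pos]
              chars[pos] = rng.choice(
                  [c for c in 'abcdefghijklmnopqrstuvwxyz' if c != original])
          return ''.join(chars), set(positions)

      def proofreader_score(corrected, original, embedded_positions):
          # Fraction of embedded errors restored to the original character
          # (assumes the corrected text keeps the same length).
          caught = sum(corrected[p] == original[p] for p in embedded_positions)
          return caught / len(embedded_positions)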

  2. Development of a hybrid mental speller combining EEG-based brain-computer interface and webcam-based eye-tracking.

    PubMed

    Lee, Jun-Hak; Lim, Jeong-Hwan; Hwang, Han-Jeong; Im, Chang-Hwan

    2013-01-01

    The main goal of this study was to develop a hybrid mental spelling system combining a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) technology and a webcam-based eye-tracker, which utilizes information from the brain electrical activity and eye gaze direction at the same time. In the hybrid mental spelling system, a character decoded using SSVEP was not typed if the position of the selected character did not match the eye-direction information ('left' or 'right') obtained from the eye-tracker. Thus, the users did not need to correct a misspelled character using a 'BACKSPACE' key. To verify the feasibility of the developed hybrid mental spelling system, we conducted online experiments with ten healthy participants. Each participant was asked to type 15 English words consisting of 68 characters. As a result, 16.6 typing errors could be prevented on average, demonstrating that the implemented hybrid mental spelling system could enhance the practicality of our mental spelling system.
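
    The gating rule is simple enough to state in a few lines: a decoded character is accepted only when its on-screen side agrees with the gaze estimate. A minimal sketch with illustrative names:

      def hybrid_select(ssvep_char, char_side, gaze_side):
          # ssvep_char: character decoded from the SSVEP classifier.
          # char_side:  'left' or 'right' half of the display it occupies.
          # gaze_side:  'left' or 'right' estimate from the webcam tracker.
          # Reject on disagreement, so no BACKSPACE correction is needed.
          return ssvep_char if char_side == gaze_side else None

      print(hybrid_select('A', 'left', 'left'))    # 'A' (typed)
      print(hybrid_select('A', 'left', 'right'))   # None (rejected)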

  3. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words

    PubMed Central

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and word length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants’ performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words and that a word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length in scrolling Japanese words. PMID:26909052

  4. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words.

    PubMed

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and word length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words and that a word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length in scrolling Japanese words.

  5. Risk behaviours for organism transmission in health care delivery-A two month unstructured observational study.

    PubMed

    Lindberg, Maria; Lindberg, Magnus; Skytt, Bernice

    2017-05-01

    Errors in infection control practices put patient safety at risk, and the probability of errors can increase when care practices become more multifaceted. It is therefore fundamental to track risk behaviours and potential errors in various care situations. The aim of this study was to describe care situations involving risk behaviours for organism transmission that could lead to subsequent healthcare-associated infections. Unstructured nonparticipant observations were performed at three medical wards. Healthcare personnel (n=27) were shadowed, for a total of 39 h, on randomly selected weekdays between 7:30 am and 12 noon. Content analysis was used to inductively categorize activities into tasks and, based on their character, into groups. Risk behaviours for organism transmission were deductively classified into types of errors. A multiple-response crosstabs procedure was used to visualize the number and proportion of errors in tasks. One-way ANOVA with a Bonferroni post hoc test was used to determine differences among the three groups of activities. The qualitative findings give an understanding that risk behaviours for organism transmission go beyond the five moments of hand hygiene and also include the handling and placement of materials and equipment. The tasks with the highest percentage of errors were 'personal hygiene', 'elimination' and 'dressing/wound care'. The most common types of errors in all identified tasks were 'hand disinfection', 'glove usage', and 'placement of materials'. Significantly more errors (p<0.0001) were observed the more multifaceted (single, combined or interrupted) the activity was. The numbers and types of errors as well as the character of activities performed in care situations described in this study confirm the need to improve current infection control practices. It is fundamental that healthcare personnel practice good hand hygiene; however, effective preventive hygiene is complex in healthcare activities due to the multifaceted care situations, especially when activities are interrupted. A deeper understanding of infection control practices that goes beyond the sense of security by means of hand disinfection and use of gloves is needed, as materials and surfaces in the care environment might be contaminated and thus pose a risk for organism transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. The Influence of Brand Equity Characters on Children's Food Preferences and Choices.

    PubMed

    McGale, Lauren Sophie; Halford, Jason Christian Grovenor; Harrold, Joanne Alison; Boyland, Emma Jane

    2016-10-01

    To assess the influence of brand equity characters displayed on food packaging on children's food preferences and choices, 2 studies were conducted. Brand equity characters are developed specifically to represent a particular brand or product. Despite existing literature suggesting that promotional characters influence children's food choices, to date, no research has assessed the influence of brand equity characters specifically. We recruited 209 children 4-8 years of age from schools and childcare centers in the UK. In a mixed-measures design, the children were asked to rate their taste preferences and preferred snack choice for 3 matched food pairs, presented either with or without a brand equity character displayed on packaging. Study 1 addressed congruent food-character associations and study 2 addressed incongruent associations. Participants were also asked to rate their recognition and liking of characters used. Wilcoxon signed-rank tests and χ² analyses were used where appropriate. Children were significantly more likely to show a preference for foods with a brand equity character displayed on the packaging compared with a matched food without a brand equity character, for both congruent and incongruent food-character associations. The presence of a brand equity character also significantly influenced the children's within-pair preferences, within-pair choices, and overall snack choice (congruent associations only). Displaying brand equity characters promotes unhealthy food choices in children. The findings are consistent with those of studies exploring other types of promotional characters. In the context of a childhood obesity epidemic, the use of brand equity characters in the promotion of foods high in fat, salt, and sugar to children should be restricted. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Evolution of Fish-Shaped Reptiles (reptilia: Ichthyopterygia) in Their Physical Environments and Constraints

    NASA Astrophysics Data System (ADS)

    Motani, Ryosuke

    2005-01-01

    Ichthyosaurs were a group of Mesozoic marine reptiles that evolved fish-shaped body outlines. They are unique in several anatomical characters, including the possession of enormous eyeballs sometimes exceeding 25 cm in diameter and an enlarged manus with sometimes up to 20 bones in a single digit, or 10 digits per manus. They are also unique in that their biology has been studied from the perspective of physical constraints, which allowed estimation of such characteristics as optimal cruising speed, visual sensitivity, and even possible basal metabolic rate ranges. These functional inferences, although based on physical principles, obviously contain errors arising from the limitations of fossilized data, but are nevertheless stronger than the commonly made inferences based on superficial correlations among quantities without mechanical or optical explanations for why such correlations exist.

  8. 76 FR 39757 - Filing Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... an optical character recognition process, such a document may contain recognition errors. CAUTION... network speed e-filing of these documents may be difficult. Pursuant to section II(C) above, the Secretary... optical scan format or a typed ``electronic signature,'' e.g., ``/s/Jane Doe.'' (3) In the case of a...

  9. Strategy of restraining ripple error on surface for optical fabrication.

    PubMed

    Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen

    2014-09-10

    The influence of ripple error on imaging quality can be effectively reduced by restraining the ripple height. A method based on the process parameters and the surface error distribution is designed to suppress the ripple height in this paper. The generating mechanism of the ripple error is analyzed using polishing theory with a uniform removal character. The relation between the processing parameters (removal functions, pitch of path, and dwell time) and the ripple error is discussed through simulations. With these, a strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 workpieces using the optimizing strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS. The ripple error is restrained well at the same time, because the ripple height is less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.
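
    The dependence of ripple height on raster pitch relative to the removal-function width can be illustrated with a 1-D sum of Gaussian removal footprints; all parameters below are illustrative, not the paper's process values.

      import numpy as np

      def ripple_height(pitch, sigma, half_width=50.0, step=0.01):
          # Sum identical Gaussian removal functions centred on a raster of
          # the given pitch; the peak-to-valley of the summed removal over
          # the central region is the residual ripple height.
          x = np.arange(-half_width, half_width, step)
          centers = np.arange(-half_width, half_width, pitch)
          removal = sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in centers)
          mid = np.abs(x) < half_width / 2        # ignore edge roll-off
          return removal[mid].max() - removal[mid].min()

      # Halving the pitch relative to sigma sharply lowers the ripple:
      print(ripple_height(pitch=1.0, sigma=1.0), ripple_height(pitch=0.5, sigma=1.0))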

  10. TELLTALE: Experiments in a Dynamic Hypertext Environment for Degraded and Multilingual Data.

    ERIC Educational Resources Information Center

    Pearce, Claudia; Nicholas, Charles

    1996-01-01

    Presents experimentation results for the TELLTALE system, a dynamic hypertext environment that provides full-text search from a hypertext-style user interface for text corpora that may be garbled by OCR (optical character recognition) or transmission errors, and that may contain languages other than English. (Author/LRW)

  11. Good character at school: positive classroom behavior mediates the link between character strengths and school achievement

    PubMed Central

    Wagner, Lisa; Ruch, Willibald

    2015-01-01

    Character strengths have been found to be substantially related to children’s and adolescents’ well-being. Initial evidence suggests that they also matter for school success (e.g., Weber and Ruch, 2012). The present set of two studies aimed at replicating and extending these findings in two different age groups, primary school students (N = 179; mean age = 11.6 years) and secondary school students (N = 199; mean age = 14.4 years). The students completed the VIA-Youth (Values in Action Inventory of Strengths for Youth), a self-report measure of the 24 character strengths in the VIA classification. Their teachers rated the students’ positive behavior in the classroom. Additionally, school achievement was assessed: For the primary school students (Study 1), teachers rated the students’ overall school achievement and for the secondary school students (Study 2), we used their grades as a measure of school achievement. We found that several character strengths were associated with both positive classroom behavior and school achievement. Across both samples, school achievement was correlated with love of learning, perseverance, zest, gratitude, hope, and perspective. The strongest correlations with positive classroom behavior were found for perseverance, self-regulation, prudence, social intelligence, and hope. For both samples, there were indirect effects of some of the character strengths on school achievement through teacher-rated positive classroom behavior. The converging findings from the two samples support the notion that character strengths contribute to positive classroom behavior, which in turn enhances school achievement. Results are discussed in terms of their implications for future research and for school interventions based on character strengths. PMID:26029144

  12. Phylogenetic comparative methods complement discriminant function analysis in ecomorphology.

    PubMed

    Barr, W Andrew; Scott, Robert S

    2014-04-01

    In ecomorphology, Discriminant Function Analysis (DFA) has been used as evidence for the presence of functional links between morphometric variables and ecological categories. Here we conduct simulations of characters containing phylogenetic signal to explore the performance of DFA under a variety of conditions. Characters were simulated using a phylogeny of extant antelope species from known habitats. Characters were modeled with no biomechanical relationship to the habitat category; the only sources of variation were body mass, phylogenetic signal, or random "noise." DFA on the discriminability of habitat categories was performed using subsets of the simulated characters, and Phylogenetic Generalized Least Squares (PGLS) was performed for each character. Analyses were repeated with randomized habitat assignments. When simulated characters lacked phylogenetic signal and/or habitat assignments were random, <5.6% of DFAs and <8.26% of PGLS analyses were significant. When characters contained phylogenetic signal and actual habitats were used, 33.27 to 45.07% of DFAs and <13.09% of PGLS analyses were significant. False Discovery Rate (FDR) corrections for multiple PGLS analyses reduced the rate of significance to <4.64%. In all cases using actual habitats and characters with phylogenetic signal, correct classification rates of DFAs exceeded random chance. In simulations involving phylogenetic signal in both predictor variables and predicted categories, PGLS with FDR was rarely significant, while DFA often was. In short, DFA offered no indication that differences between categories might be explained by phylogenetic signal, while PGLS did. As such, PGLS provides a valuable tool for testing the functional hypotheses at the heart of ecomorphology. Copyright © 2013 Wiley Periodicals, Inc.

  13. Good character at school: positive classroom behavior mediates the link between character strengths and school achievement.

    PubMed

    Wagner, Lisa; Ruch, Willibald

    2015-01-01

    Character strengths have been found to be substantially related to children's and adolescents' well-being. Initial evidence suggests that they also matter for school success (e.g., Weber and Ruch, 2012). The present set of two studies aimed at replicating and extending these findings in two different age groups, primary school students (N = 179; mean age = 11.6 years) and secondary school students (N = 199; mean age = 14.4 years). The students completed the VIA-Youth (Values in Action Inventory of Strengths for Youth), a self-report measure of the 24 character strengths in the VIA classification. Their teachers rated the students' positive behavior in the classroom. Additionally, school achievement was assessed: For the primary school students (Study 1), teachers rated the students' overall school achievement and for the secondary school students (Study 2), we used their grades as a measure of school achievement. We found that several character strengths were associated with both positive classroom behavior and school achievement. Across both samples, school achievement was correlated with love of learning, perseverance, zest, gratitude, hope, and perspective. The strongest correlations with positive classroom behavior were found for perseverance, self-regulation, prudence, social intelligence, and hope. For both samples, there were indirect effects of some of the character strengths on school achievement through teacher-rated positive classroom behavior. The converging findings from the two samples support the notion that character strengths contribute to positive classroom behavior, which in turn enhances school achievement. Results are discussed in terms of their implications for future research and for school interventions based on character strengths.

  14. Iterative cross section sequence graph for handwritten character segmentation.

    PubMed

    Dawoud, Amer

    2007-08-01

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.
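
    As a toy illustration of the iterative-thresholding idea only (the cross-section-graph construction itself is beyond a short sketch), the code below binarizes a grayscale word image at several equally spaced thresholds and pools candidate cut columns across them, reducing the information loss of committing to a single binarization. The names and the cut criterion are ours, not the paper's.

      import numpy as np

      def candidate_cuts(gray, n_levels=8):
          # gray: 2-D array, larger values = darker ink.
          cuts = set()
          lo, hi = float(gray.min()), float(gray.max())
          for t in np.linspace(lo, hi, n_levels + 2)[1:-1]:
              binary = gray > t                   # ink at this threshold
              profile = binary.sum(axis=0)        # ink pixels per column
              cuts.update(np.where(profile == 0)[0].tolist())  # gap columns
          return sorted(cuts)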

  15. THE INFLUENCE OF DOM CHARACTER ON OZONE DECOMPOSITION RATES AND RCT

    EPA Science Inventory

    The effects of DOM character on ozonation of natural waters and solutions of DOM isolates were investigated. Batch kinetic investigations measured O3 decomposition rate constants and Rct values. Rct describes the ratio of •OH concentration to O3 concentration, and thus provides...

  16. Sherlock Holmes Meets Othello: A MDS Analysis of Literary Characters.

    ERIC Educational Resources Information Center

    Russell, G. W.; Lambert, W. B.

    1980-01-01

    Changes in college freshmen's perceptual organization of characters from "Othello" after three weeks of study and lecture were assessed using multidimensional scaling procedures. Sherlock Holmes experts also provided dissimilarity ratings of Conan Doyle's characters. Discussion centers on the extent to which the lectures on…

  17. Optimal rates for phylogenetic inference and experimental design in the era of genome-scale datasets.

    PubMed

    Dornburg, Alex; Su, Zhuo; Townsend, Jeffrey P

    2018-06-25

    With the rise of genome-scale datasets there has been a call for increased data scrutiny and careful selection of loci appropriate for attempting the resolution of a phylogenetic problem. Such loci are desired to maximize phylogenetic information content while minimizing the risk of homoplasy. Theory posits the existence of characters that evolve under such an optimum rate, and efforts to determine optimal rates of inference have been a cornerstone of phylogenetic experimental design for over two decades. However, both theoretical and empirical investigations of optimal rates have varied dramatically in their conclusions, ranging from no relationship to a tight relationship between the rate of change and phylogenetic utility. Here we synthesize these apparently contradictory views, demonstrating both empirical and theoretical conditions under which each is correct. We find that optimal rates of characters, not genes, are generally robust to most experimental design decisions. Moreover, consideration of site rate heterogeneity within a given locus is critical to accurate predictions of utility. Factors such as taxon sampling or the targeted number of characters providing support for a topology are additionally critical to the predictions of phylogenetic utility based on the rate of character change. Further, optimality of rates and predictions of phylogenetic utility are not equivalent, demonstrating the need for further development of a comprehensive theory of phylogenetic experimental design.

  18. Pain judgements of patients' relatives: examining the use of social contract theory as theoretical framework.

    PubMed

    Kappesser, Judith; de C Williams, Amanda C

    2008-08-01

    Observer underestimation of others' pain was studied using a concept from evolutionary psychology: a cheater detection mechanism from social contract theory, applied to relatives and friends of chronic pain patients. 127 participants estimated characters' pain intensity and fairness of behaviour after reading four vignettes describing characters suffering from pain. Four cues were systematically varied: the character continuing or stopping liked tasks; continuing or stopping disliked tasks; availability of medical evidence; and pain intensity as rated by characters. Results revealed that pain intensity and the two behavioural variables had an effect on pain estimates: high pain self-reports and stopping all tasks led to high pain estimates; pain was estimated to be lowest when characters stopped disliked but continued with liked tasks. This combination was also rated least fair. Results support the use of social contract theory as a theoretical framework to explore pain judgements.

  19. Relationships between Implementing Character Education, Student Behavior, and Student Achievement

    ERIC Educational Resources Information Center

    Skaggs, Gary; Bodenhorn, Nancy

    2006-01-01

    Over a 4-year period, researchers measured several outcomes in 5 school districts initiating or enhancing character education programs. Based on student, teacher, and administrator surveys, there was a noticeable improvement in character-related behavior. In certain districts, suspension and drop-out rates also decreased after the implementation…

  20. Identifying heterogeneity in rates of morphological evolution: discrete character change in the evolution of lungfish (Sarcopterygii; Dipnoi).

    PubMed

    Lloyd, Graeme T; Wang, Steve C; Brusatte, Stephen L

    2012-02-01

    Quantifying rates of morphological evolution is important in many macroevolutionary studies, and critical when assessing possible adaptive radiations and episodes of punctuated equilibrium in the fossil record. However, studies of morphological rates of change have lagged behind those on taxonomic diversification, and most authors have focused on continuous characters and quantifying patterns of morphological rates over time. Here, we provide a phylogenetic approach, using discrete characters and three statistical tests to determine points on a cladogram (branches or entire clades) that are characterized by significantly high or low rates of change. These methods include a randomization approach that identifies branches with significantly high rates and likelihood ratio tests that pinpoint either branches or clades that have significantly higher or lower rates than the pooled rate of the remainder of the tree. As a test case for these methods, we analyze a discrete character dataset of lungfish, which have long been regarded as "living fossils" due to an apparent slowdown in rates since the Devonian. We find that morphological rates are highly heterogeneous across the phylogeny and recover a general pattern of decreasing rates along the phylogenetic backbone toward living taxa, from the Devonian until the present. Compared with previous work, we are able to report a more nuanced picture of lungfish evolution using these new methods. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
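
    The flavor of the clade likelihood-ratio test can be conveyed with a simple Poisson model: compare one pooled rate against separate rates for a focal clade and the rest of the tree, referring twice the log-likelihood gain to a chi-squared distribution with one degree of freedom. This is a schematic stand-in for the paper's methods; inputs are counts of character changes and summed branch durations, assumed nonzero.

      from math import log
      from scipy.stats import chi2

      def poisson_loglik(changes, duration, rate):
          # Log-likelihood of `changes` events over `duration` at `rate`
          # (the log-factorial term is dropped; it cancels in the ratio).
          return changes * log(rate * duration) - rate * duration

      def clade_rate_test(clade_changes, clade_dur, rest_changes, rest_dur):
          pooled = (clade_changes + rest_changes) / (clade_dur + rest_dur)
          null = (poisson_loglik(clade_changes, clade_dur, pooled)
                  + poisson_loglik(rest_changes, rest_dur, pooled))
          alt = (poisson_loglik(clade_changes, clade_dur, clade_changes / clade_dur)
                 + poisson_loglik(rest_changes, rest_dur, rest_changes / rest_dur))
          lr = 2 * (alt - null)
          return lr, chi2.sf(lr, df=1)   # statistic and p-value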

  1. The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.

    PubMed

    Debry, Ronald W

    2005-06-01

    The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.

  2. Fast title extraction method for business documents

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Naoi, Satoshi

    1997-04-01

    Conventional electronic document filing systems are inconvenient because the user must specify the keywords in each document for later searches. To solve this problem, automatic keyword extraction methods using natural language processing and character recognition have been developed. However, these methods are slow, especially for japanese documents. To develop a practical electronic document filing system, we focused on the extraction of keyword areas from a document by image processing. Our fast title extraction method can automatically extract titles as keywords from business documents. All character strings are evaluated for similarity by rating points associated with title similarity. We classified these points as four items: character sitting size, position of character strings, relative position among character strings, and string attribution. Finally, the character string that has the highest rating is selected as the title area. The character recognition process is carried out on the selected area. It is fast because this process must recognize a small number of patterns in the restricted area only, and not throughout the entire document. The mean performance of this method is an accuracy of about 91 percent and a 1.8 sec. processing time for an examination of 100 Japanese business documents.
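
    The selection step lends itself to a simple weighted-scoring sketch. The feature names and weights below are placeholders for illustration; the paper's actual rating points are not given in the abstract:

```python
# Score candidate character strings along the paper's four items and pick
# the highest-rated one as the title area. Weights are illustrative only.
def title_score(string_feats, weights=(0.4, 0.3, 0.2, 0.1)):
    """string_feats: per-item ratings in [0, 1] for one character string,
    ordered as (character size, position, relative position, attribution)."""
    return sum(w * f for w, f in zip(weights, string_feats))

def pick_title_area(candidates):
    """candidates: list of (bounding_box, feature_tuple) pairs.
    Returns the box with the highest rating; OCR then runs on it alone."""
    return max(candidates, key=lambda c: title_score(c[1]))[0]
```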

  3. 7 CFR 97.121 - Corrected certificate-applicant's mistake.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... applicant of a clerical or typographical nature, or of minor character, or in the description of the variety (including, but not limited to, the use of a misleading variety name or a name assigned to a different... LABORATORY TESTING PROGRAMS PLANT VARIETY AND PROTECTION Correction of Errors in Certificate § 97.121...

  4. 12 CFR 208.3 - Application and conditions for membership in the Federal Reserve System.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: (1) Financial condition and management. The financial history and condition of the applying bank and the general character of its management. (2) Capital. The adequacy of the bank's capital in accordance....3(c)(1); (iii) The application contains a material error or is otherwise deficient; or (iv) The...

  5. [Preserved ability to read aloud kanji idioms in left handed alexia].

    PubMed

    Suzuki, Taemi; Suzuki, Kyoko; Iizuka, Osamu; Endo, Keiko; Yamadori, Atushi; Mori, Eturou

    2004-08-01

    We report a 69-year-old left-handed man, who developed alexia after a right medial occipito-temporal lobe infarction. On admission to the rehabilitation department two months after the onset, neurological examination showed left hemianopia, left hemiparesis, decreased deep sensation on the left side, and alexia. A brain MRI demonstrated infarcts in the right medial occipito-temporal lobe and the splenium of the corpus callosum. Detailed neuropsychological examination was performed two months after the onset. The patient was alert and cooperative. His speech was fluent with some word-finding difficulty. Comprehension for spoken materials, repetition, and naming abilities were all preserved. Systematic examination for reading revealed that reading aloud was disturbed in both kanji and kana words. Reading comprehension was significantly better for kanji words than kana words. First, we examined the effects of number of characters in a word. The number of characters in a word didn't affect his reading performance. Second, his performance on reading aloud of usual kanji words was compared with that of kanji words representing idioms. A kanji idiom is different from usual kanji words, in which pronunciation of each character is selected from several options. Reading aloud kanji idioms was significantly better than usual kanji words. In addition, reaction time to complete reading a word was much shorter for kanji idioms than usual kanji. An analysis of qualitative features of errors revealed that most errors in kanji idiom reading were semantically similar to the correct answers, while many errors in usual kanji word reading were classified into "don't know" responses. These findings suggested that a kanji idiom was tightly connected to its pronunciation, which resulted in his much better performance for kanji idiom reading. Overlearning of a unique relationship between a kanji idiom and its pronunciation might modify neuronal organization for reading.

  6. The National Eutrophication Survey: lake characteristics and historical nutrient concentrations

    NASA Astrophysics Data System (ADS)

    Stachelek, Joseph; Ford, Chanse; Kincaid, Dustin; King, Katelyn; Miller, Heather; Nagelkirk, Ryan

    2018-01-01

    Historical ecological surveys serve as a baseline and provide context for contemporary research, yet many of these records are not preserved in a way that ensures their long-term usability. The National Eutrophication Survey (NES) database is currently only available as scans of the original reports (PDF files) with no embedded character information. This limits its searchability, machine readability, and the ability of current and future scientists to systematically evaluate its contents. The NES data were collected by the US Environmental Protection Agency between 1972 and 1975 as part of an effort to investigate eutrophication in freshwater lakes and reservoirs. Although several studies have manually transcribed small portions of the database in support of specific studies, there have been no systematic attempts to transcribe and preserve the database in its entirety. Here we use a combination of automated optical character recognition and manual quality assurance procedures to make these data available for analysis. The performance of the optical character recognition protocol was found to be linked to variation in the quality (clarity) of the original documents. For each of the four archival scanned reports, our quality assurance protocol found an error rate between 5.9 and 17 %. The goal of our approach was to strike a balance between efficiency and data quality by combining entry of data by hand with digital transcription technologies. The finished database contains information on the physical characteristics, hydrology, and water quality of about 800 lakes in the contiguous US (Stachelek et al.(2017), https://doi.org/10.5063/F1639MVD). Ultimately, this database could be combined with more recent studies to generate meta-analyses of water quality trends and spatial variation across the continental US.
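
    A standard way to compute the kind of transcription error rate reported here is the character error rate: the edit distance between the OCR output and a hand-verified reference, divided by the reference length. A minimal sketch follows; the NES QA protocol is not specified at this level of detail, so this is illustrative only:

```python
# Character error rate (CER) via Levenshtein edit distance.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def char_error_rate(ocr_text, reference):
    return levenshtein(ocr_text, reference) / max(len(reference), 1)

# char_error_rate("NES 1a4e survey", "NES lake survey") -> 2/15, about 0.13
```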

  7. Optical character recognition of handwritten Arabic using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
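
    The decoding step the abstract describes is the textbook Viterbi recursion over log-probabilities. A generic sketch, with matrices standing in for the paper's trained transition and emission parameters:

```python
# Viterbi decoding: most probable state (sub-word) sequence for a sequence
# of observed symbol indices. Textbook sketch, not the authors' code.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    n_states = trans_p.shape[0]
    T = len(obs)
    logv = np.full((T, n_states), -np.inf)   # best log-prob ending in state s
    back = np.zeros((T, n_states), dtype=int)
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logv[-1]))]        # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```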

  8. Optical character recognition of handwritten Arabic using hidden Markov models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.

    2011-01-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.

  9. Testing the impact of morphological rate heterogeneity on ancestral state reconstruction of five floral traits in angiosperms.

    PubMed

    Reyes, Elisabeth; Nadot, Sophie; von Balthazar, Maria; Schönenberger, Jürg; Sauquet, Hervé

    2018-06-21

    Ancestral state reconstruction is an important tool to study morphological evolution and often involves estimating transition rates among character states. However, various factors, including taxonomic scale and sampling density, may impact transition rate estimation and indirectly also the probability of the state at a given node. Here, we test the influence of rate heterogeneity using maximum likelihood methods on five binary perianth characters, optimized on a phylogenetic tree of angiosperms including 1230 species sampled from all families. We compare the states reconstructed by an equal-rate (Mk1) and a two-rate model (Mk2) fitted either with a single set of rates for the whole tree or as a partitioned model, allowing for different rates on five partitions of the tree. We find strong signal for rate heterogeneity among the five subdivisions for all five characters, but little overall impact of the choice of model on reconstructed ancestral states, which indicates that most of our inferred ancestral states are the same whether heterogeneity is accounted for or not.
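
    The Mk1/Mk2 distinction comes down to whether the two off-diagonal entries of the rate matrix are constrained to be equal. A minimal sketch of the transition probabilities over a branch (standard Mk machinery, not the authors' code):

```python
# Two-state Mk model: forward rate a (state 0 -> 1), backward rate b.
# Mk1 is the special case a == b; Mk2 lets them differ.
import numpy as np
from scipy.linalg import expm

def mk2_transition_probs(a, b, t):
    Q = np.array([[-a,  a],
                  [ b, -b]])
    return expm(Q * t)   # P[i, j] = prob of ending in state j from state i

# mk2_transition_probs(0.1, 0.1, 1.0) reproduces the equal-rate Mk1 case.
```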

  10. The Role of Character in the Hiring Process: A Pilot Study Survey of College Seniors' Potential Employers

    ERIC Educational Resources Information Center

    Firmin, Michael; Proemmel, Elizabeth; McDivitt, Sarah; Evens, Jennifer; Gibbs, Lew

    2009-01-01

    We surveyed 31 prospective employers (65% response rate) regarding their views on character as part of the employment selection process. The results showed character qualities to be superordinate relative to the skills that prospective employees bring to potential jobs. We discuss survey results in light of business educators' responsibility for helping…

  11. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
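
    The core of such a correction can be sketched as a per-tip mixture: the likelihood of the observed base sums over the possible true bases, weighted by a sequencing-error term. The uniform error model below (a misread base becomes any of the other three with equal probability) is an assumption for illustration, not necessarily the tested correction:

```python
# Error-aware tip likelihood for one site, assuming a uniform error model.
def observed_base_likelihood(obs, true_base_probs, error_rate):
    """true_base_probs: dict base -> P(true base | tree, model) at this tip."""
    lik = 0.0
    for true_base, p in true_base_probs.items():
        if true_base == obs:
            lik += p * (1.0 - error_rate)       # read correctly
        else:
            lik += p * (error_rate / 3.0)       # misread as one of the other 3
    return lik

# observed_base_likelihood('A', {'A': .9, 'C': .05, 'G': .03, 'T': .02}, 0.01)
```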

  12. Error-Correcting Parsing for Syntactic Pattern Recognition

    DTIC Science & Technology

    1977-08-01

    …1971. 55. Siromoney, G., Siromoney, R., and Krithivasan, K., "Abstract Families of Matrices and Picture Languages," Computer Graphics and Image Processing. [The remainder of this scanned excerpt is unrecoverable OCR noise from the report's sample parser timing output.]

  13. Development and validity of a method for the evaluation of printed education material

    PubMed Central

    Castro, Mauro Silveira; Pilger, Diogo; Fuchs, Flávio Danni; Ferreira, Maria Beatriz Cardoso

    Objectives To develop and study the validity of an instrument for the evaluation of Printed Education Materials (PEM); to evaluate the use of acceptability indices; to identify possible influences of professional aspects. Methods An instrument for PEM evaluation was developed which included three steps: domain identification, item generation, and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposively introduced into the PEM, which served as extreme groups. An acceptability index was applied taking into account the rate of professionals who should approve each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results Many professionals identified the intentional errors of a crude character. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required literature analysis. Physicians considered 95.8% of the PEM items acceptable; nurses, 29.2%. The differences in scoring were statistically significant for 27% of the items. In the overall evaluation, 66.6% of items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions Instruments for the evaluation of printed education materials are needed and may improve the quality of the PEM available to patients. Acceptability indices are not always entirely accurate, nor do they necessarily reflect high-quality information. The professional experience, practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEM by professionals in communication and drug information, and by patients, should be carried out to improve the quality of the proposed material. PMID:25214924

  14. OCR enhancement through neighbor embedding and fast approximate nearest neighbors

    NASA Astrophysics Data System (ADS)

    Smith, D. C.

    2012-10-01

    Generic optical character recognition (OCR) engines often perform very poorly in transcribing scanned low resolution (LR) text documents. To improve OCR performance, we apply the Neighbor Embedding (NE) single-image super-resolution (SISR) technique to LR scanned text documents to obtain high resolution (HR) versions, which we subsequently process with OCR. For comparison, we repeat this procedure using bicubic interpolation (BI). We demonstrate that mean-square errors (MSE) in NE HR estimates do not increase substantially when NE is trained in one Latin font style and tested in another, provided both styles belong to the same font category (serif or sans serif). This is very important in practice, since for each font size, the number of training sets required for each category may be reduced from dozens to just one. We also incorporate randomized k-d trees into our NE implementation to perform approximate nearest neighbor search, and obtain a 1000x speedup of our original NE implementation, with negligible MSE degradation. This acceleration also made it practical to combine all of our size-specific NE Latin models into a single Universal Latin Model (ULM). The ULM eliminates the need to determine the unknown font category and size of an input LR text document and match it to an appropriate model, a very challenging task, since the dpi (dots per inch) of the input LR image is generally unknown. Our experiments show that OCR character error rates (CER) were over 90% when we applied the Tesseract OCR engine to LR text documents (scanned at 75 dpi and 100 dpi) in the 6-10 pt range. By contrast, using k-d trees and the ULM, CER after NE preprocessing averaged less than 7% at 3x (100 dpi LR scanning) and 4x (75 dpi LR scanning) magnification, over an order of magnitude improvement. Moreover, CER after NE preprocessing was more than 6 times lower on average than after BI preprocessing.
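
    The nearest-neighbour search that dominates NE's cost is easy to sketch with scipy's k-d tree, standing in here for the randomized k-d trees used in the paper (pass eps > 0 to query for approximate search). Shapes and data are illustrative:

```python
# For each LR patch, find its k closest LR training patches; their paired
# HR patches are then combined in the NE reconstruction (omitted here).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train_lr = rng.random((10000, 25))   # 10k flattened 5x5 LR training patches
tree = cKDTree(train_lr)

query_lr = rng.random((100, 25))     # LR patches from the input document
dists, idx = tree.query(query_lr, k=5, eps=0.1)  # approximate 5-NN search
# idx[i] indexes the training patches used to reconstruct query patch i.
```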

  15. The Impact of a Modified Repeated-Reading Strategy Paired with Optical Character Recognition on the Reading Rates of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Pattillo, Suzan Trefry; Heller, Kathryn Wolf; Smith, Maureen

    2004-01-01

    The repeated-reading strategy and optical character recognition were paired to demonstrate a functional relationship between the combined strategies and two factors: the reading rates of students with visual impairments and the students' self-perceptions, or attitudes, toward reading. The results indicated that all five students increased their…

  16. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be reduced significantly, by a factor that increases with the code distance.
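
    The estimation step can be sketched with an off-the-shelf Gaussian process regressor; the kernel choice and toy data below are illustrative, not the protocol's:

```python
# Fit a GP to past error-rate observations and predict the current rate
# (with uncertainty), as in the abstract's estimate-and-predict step.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 10, 50)[:, None]      # times of past error-correction rounds
y = 0.01 + 0.002 * np.sin(t).ravel()     # toy slowly drifting error rate
gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(1e-6))
gp.fit(t, y)
mean, std = gp.predict(np.array([[10.5]]), return_std=True)  # rate "now"
```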

  17. Character context: a shape descriptor for Arabic handwriting recognition

    NASA Astrophysics Data System (ADS)

    Mudhsh, Mohammed; Almodfer, Rolla; Duan, Pengfei; Xiong, Shengwu

    2017-11-01

    In the handwriting recognition field, designing good descriptors is essential to obtaining rich information from the data. However, the design of a good descriptor remains an open research issue due to the unlimited variation in human handwriting. We introduce a "character context descriptor" that efficiently deals with the structural characteristics of Arabic handwritten characters. First, the character image is smoothed and normalized; then the character context descriptor of 32 feature bins is built based on the proposed "distance function." Finally, a multilayer perceptron with regularization is used as a classifier. In experiments on a handwritten Arabic character database, the proposed method achieved state-of-the-art performance, with recognition rates of 98.93% and 99.06% for the 66 and 24 classes, respectively.

  18. Quantitative genetics of ultrasonic advertisement signalling in the lesser waxmoth Achroia grisella (Lepidoptera: pyralidae).

    PubMed

    Collins, R D; Jang, Y; Reinhold, K; Greenfield, M D

    1999-12-01

    Males of the lesser waxmoth Achroia grisella (Lepidoptera: Pyralidae) produce ultrasonic advertisement signals attractive to females within several metres. Previous studies showed that females prefer male signals that are louder, delivered at a faster rate, and have a greater asynchrony between pulses produced by the right and left wings. These three signal characters vary considerably within populations but are repeatable within individuals. Breeding experiments employing half-sib designs were conducted on both collectively and individually reared moths to determine genetic variance within and covariance among these signal characters. Heritabilities of all signal characters were significant among collectively reared moths. Heritabilities for signal rate and right-left wing asynchrony interval were not significant, however, among individually reared moths, suggesting the presence of significant nonadditive genetic variance or common environmental variation. Development time was also significantly heritable, but only under individual rearing. The only significant genetic correlation was between signal rate and length of the right-left wing asynchrony and this was negative. Our findings on heritability of signal characters are consistent with a coevolutionary sexual selection mechanism, but the absence of signal x development genetic correlation fails to support specifically a good-genes mechanism. The variation in heritability among conditions suggests that environmental variance may be high, and may render selection on signal characters by female choice ineffective. Thus, additive genetic variance for these characters may be maintained in the presence of directional female choice.

  19. Perceptions of Americans and the Iraq Invasion: Implications for Understanding National Character Stereotypes

    PubMed Central

    Terracciano, Antonio; McCrae, Robert R.

    2008-01-01

    This study examines perceptions of the “typical American” from 49 cultures around the world. Contrary to the ethnocentric bias hypothesis, we found strong agreement between in-group and out-group ratings on the American profile (assertive, open-minded, but antagonistic); Americans in fact had a somewhat less desirable view of Americans than did others. Within cultures, in-group ratings were not systematically more favorable than out-group ratings. The Iraq invasion had a slight negative effect on perceptions of the typical American, but people around the world seem to draw a clear distinction between U.S. foreign policy and the character of the American people. National character stereotypes appear to have a variety of sources and to be perpetuated by both cognitive mechanisms and socio-cultural forces. PMID:18618011

  20. Dynamic MR imaging of soft tissue tumors with assessment of the rate and character of lesion enhancement.

    PubMed

    Tacikowska, Małgorzata

    2002-02-01

    The aim of this study was to analyze the diagnostic usefulness of dynamic MRI with determination of the coefficient of enhancement rate and the character of tumor enhancement, and to assess both parameters in the differentiation of malignant lesions. The material consisted of 45 patients (30 sarcomas, 15 non-malignant lesions), age 16-64 years. MRI was done using an Elscint 2T unit, gradient echo techniques, apex angle 80 degrees. The repetition time (TR) was 80-200 ms, the echo time (TE) was 2-6 ms, 1 excitation; the acquisition time (TA) was 70-80 ms. The coefficient of tissue enhancement rate was calculated in the region of interest, and expressed as percent per second (erc%/s). The limit value of erc%/s was determined. The sensitivity and specificity of MRI were calculated in the differentiation of malignant tumors. The method of contrast filling of the tumors was assessed in successive phases after administration of gadolinium Gd-DTPA. Dynamic MRI with determination of the index of tumor enhancement rate is highly sensitive (93%) and specific (73%) in the differentiation of malignant and benign lesions. The usefulness of the assessment of tumor enhancement character was not confirmed, since the sensitivity and specificity were 73% and 33%. Dynamic MRI with determination of erc%/s and tumor enhancement character is highly sensitive (93%) and specific (87%). Dynamic MRI with determination of erc%/s and tumor enhancement character is the best method for differential diagnosis.

  1. Quantitative computed tomography (QCT) as a radiology reporting tool by using optical character recognition (OCR) and macro program.

    PubMed

    Lee, Young Han; Song, Ho-Taek; Suh, Jin-Suck

    2012-12-01

    The objectives are (1) to introduce a new concept for building a quantitative computed tomography (QCT) reporting system using optical character recognition (OCR) and a macro program, and (2) to illustrate the practical use of the QCT reporting system in the radiology reading environment. The reporting system was created as a development tool using open-source OCR software and an open-source macro program. The main module was designed to OCR QCT images in the radiology reading process. The principal steps are as follows: (1) save a QCT report as a graphic file, (2) recognize the characters from the image as text, (3) extract the T scores from the text, (4) perform error correction, (5) reformat the values into the QCT radiology reporting template, and (6) paste the report into the electronic medical record (EMR) or picture archiving and communication system (PACS). The accuracy of OCR was tested on randomly selected QCTs. The QCT reporting system successfully performed OCR of QCT reports, and the diagnosis of normal, osteopenia, or osteoporosis is also determined. OCR error correction is done with an AutoHotkey-coded module. The extracted T scores of the femoral neck and lumbar vertebrae had accuracies of 100% and 95.4%, respectively. A convenient QCT reporting system could thus be established using open-source OCR software and an open-source macro program, and the method can be easily adapted for other QCT applications and PACS/EMR environments.
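
    Steps (2)-(4) of the pipeline can be sketched as follows, with pytesseract standing in for the open-source OCR engine (the specific engine is an assumption here) and a regex pass doing the T-score extraction plus simple error correction:

```python
# OCR a saved QCT report image and pull out the T scores. Illustrative
# sketch; the pattern tolerates common O/0 confusions as "error correction".
import re
import pytesseract
from PIL import Image

def extract_t_scores(report_image_path):
    raw = pytesseract.image_to_string(Image.open(report_image_path))
    scores = []
    for m in re.findall(r"T-?\s*score\s*[:=]?\s*(-?\s*[\dOo]+\.[\dOo]+)",
                        raw, flags=re.I):
        cleaned = m.replace("O", "0").replace("o", "0").replace(" ", "")
        scores.append(float(cleaned))
    return scores
```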

  2. a Discussion about Effective Ways of Basic Resident Register on GIS

    NASA Astrophysics Data System (ADS)

    Oku, Naoya; Nonaka, Yasuaki; Ito, Yutaka

    2016-06-01

    In Japan, each municipality keeps a database of every resident's name, address, gender and date of birth called the Basic Resident Register. If the address information in the register is converted into coordinates by geocoding, it can be plotted as point data on a map. This would enable prompt evacuation in a disaster, analysis of the distribution of residents, integration with statistics, and so on. Further, it can be used not only for analysis of the current situation but also for future planning. However, geographic information systems (GIS) incorporating the Basic Resident Register are not widely used in Japan because of the following problems: - Geocoding. In order to plot address point data, it is necessary to match the Basic Resident Register against the address dictionary using the address as a key. The information in the Basic Resident Register does not always match the actual addresses. As the register is based on applications made by residents, the information is prone to errors, such as incorrect Kanji characters. - Security policy on personal information. In the register, the address of a resident is linked with his/her name and date of birth. If the information in the Basic Resident Register were to be leaked, it could be used for malicious purposes. This paper proposes solutions to the above problems. The suitable solution depends on the purpose of use; thus the purpose should be defined and a suitable application method chosen for each purpose. In this paper, we mainly focus on one specific purpose of use: analysing the distribution of residents. We provide two solutions to improve the matching rate in geocoding. First, regarding errors in Kanji characters, a correction list of possible errors should be compiled in advance. Second, some analyses, such as the distribution of residents, may not require an exactly correct position for the address point. We therefore ordered the matching levels as prefecture, city, town, city-block, house-code, and house, and decided to accept matches down to the city-block level. Moreover, in terms of the security policy on personal information, some of the information may not be needed for the distribution analysis. For example, personal information such as the resident's name should be excluded from the attributes of the address point in order to ensure the safe operation of the system.
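
    The graded matching can be sketched as a fallback loop over address levels, accepting matches down to city-block. The level names follow the abstract; the dictionary layout is illustrative:

```python
# Try the finest address level first, fall back toward coarser levels,
# and accept matches down to city-block, as the abstract describes.
LEVELS = ["prefecture", "city", "town", "city_block", "house_code", "house"]
ACCEPT_UP_TO = "city_block"   # coarsest level still accepted for plotting

def geocode(address_parts, dictionary):
    """address_parts: dict level -> normalized string for one register entry.
    dictionary: dict mapping tuples of address parts to coordinates."""
    for depth in range(len(LEVELS), LEVELS.index(ACCEPT_UP_TO), -1):
        key = tuple(address_parts.get(level, "") for level in LEVELS[:depth])
        if key in dictionary:
            return dictionary[key], LEVELS[depth - 1]   # coords, match level
    return None, None   # coarser than city-block: reject this entry
```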

  3. Studies on the Written Characters Orientation and Its Influence on Digit Reversal by Children

    ERIC Educational Resources Information Center

    Fischer, Jean-Paul

    2018-01-01

    Recent research has found that children reverse mainly the left-oriented characters when writing from memory (e.g. they write [iota] and [epsilon] instead of J and 3). In order to obtain an objective definition of the left-orientation of a character, the ratings of the level of left-orientation of all the asymmetrical capital letters and digits by…

  4. Study on the characters of control valve for ammonia injection in selective catalytic reduction (SCR) system of coal-fired power plant

    NASA Astrophysics Data System (ADS)

    Yao, Che; Li, Tao; Zhang, Hong; Zhou, Yanming

    2017-08-01

    In this paper, the characteristics of two control valves used for ammonia injection in the SCR system are discussed. The linear/quadratic relationships between pressure drop/outlet flow rate and valve opening/inlet dynamic pressure are investigated using computational fluid dynamics (CFD) and response surface analysis (RSA) methods. The results show that the linearity of the brake valve is significantly better than that of the butterfly valve, which means the brake valve is more suitable for ammonia injection adjustment.

  5. Adhesive Development for Military Bridging

    DTIC Science & Technology

    1988-11-08

    …resins investigated were hydrocarbon epoxy novolacs (HEN XP71756.00 and .01), epoxy phenol novolacs (DEN 438), diglycidyl ether of bisphenol A (Epon 828)… Resin Systems for Water Resistant Adhesives… IV. Results and Discussion: A. Resin Blending and Processibility; B. First-Level Screening; C. Second-Level… …and insensitivity to minor processing errors. To accomplish this, base resins with hydrophobic character and curatives with low-temperature reactivity…

  6. Homosexuality in TV situation comedies: characters and verbal comments.

    PubMed

    Fouts, Gregory; Inch, Rebecca

    2005-01-01

    A content analysis was conducted on 22 television situation comedies in order to determine the incidence of homosexual characters, their demographics (sex, age and race/ethnicity), and whether they verbally comment about sexual orientation. One episode of each program appearing in early October 2000 was video recorded and analyzed for its contents by trained coders. Only 2% of the 125 central characters were homosexual; thus, homosexuality is significantly under-represented in programs that adolescents and young adults watch compared to actual prevalence rates of homosexuality in North America (10-13%). All the homosexual characters were male and in the 20-35-year-old age group; this indicates that homosexual adolescent viewers have no peer role models with whom to identify. Homosexual characters made significantly more comments about sexual orientation than heterosexual characters. This suggests that television writers/producers present sexual orientation as a significant theme in the lives of homosexual characters.

  7. Relationship Between Temperament and Character Traits, Mood, and Medications in Bipolar I Disorder.

    PubMed

    Chavez, Sergio B; Alvarado, Luis A; Gonzalez, Robert

    2016-01-01

    Bipolar I disorder is an illness causing mood shifts that can result in personality and character trait alterations. The relationship between mood and personality and character traits in bipolar I disorder is unclear at this time. We conducted a study from February 2009 to March 2010 that included 42 subjects with bipolar I disorder, which was confirmed using the Structured Clinical Interview for DSM-IV Axis I Disorders. Mood was assessed via the Young Mania Rating Scale (YMRS) and the 30-item Clinician-rated Inventory of Depressive Symptomatology (IDS-C). Temperament and character traits were assessed via the Temperament and Character Inventory (TCI). Multivariate analysis was used to test relationships between mood and temperament and character traits with the effects of possible cofactors taken into account (eg, age, gender, medications). We noted a positive correlation between YMRS scores and persistence ( P = .046) and a trend toward positive correlation with novelty seeking ( P = .054). There was a positive correlation between higher IDS-C scores and harm avoidance ( P < .001) and a negative correlation with self-directedness scores ( P < .001). Antipsychotic use was positively correlated with the character trait self-directedness ( P = .008), with a trend toward a positive correlation with reward dependence ( P = .056). Lithium was negatively correlated with reward dependence ( P = .047) and self-transcendence ( P = .028), with a trend toward a negative correlation with novelty seeking ( P = .053). The findings of our study suggest that some personality and character traits may vary according to mood state and medications in patients with bipolar I disorder. Prospective and longitudinal studies are required to fully characterize the relationships between personality and character traits and mood state in bipolar I disorder.

  8. Multi-frame knowledge based text enhancement for mobile phone captured videos

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-02-01

    In this study, we explore automated text recognition and enhancement using mobile-phone-captured videos of store receipts. We propose a method which includes Optical Character Recognition (OCR) enhanced by our proposed Row Based Multiple Frame Integration (RB-MFI) and Knowledge Based Correction (KBC) algorithms. In this method, first, the trained OCR engine is used for recognition; then, the RB-MFI algorithm is applied to the output of the OCR. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted by OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Results of the experiments show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.
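
    One simple reading of RB-MFI's row integration is a per-row majority vote across frames, as sketched below; the paper's actual row-accuracy criterion may differ:

```python
# Keep the most frequent transcription of each row across the OCR outputs
# of multiple video frames. Consensus-by-majority sketch only.
from collections import Counter

def integrate_rows(frame_texts):
    """frame_texts: list of OCR outputs, each a list of row strings,
    assumed already aligned so row i corresponds across frames."""
    n_rows = min(len(rows) for rows in frame_texts)
    return [Counter(rows[i] for rows in frame_texts).most_common(1)[0][0]
            for i in range(n_rows)]

# integrate_rows([["TOTAL 12.50", "TAX 1.10"],
#                 ["TOTAL 12.50", "TAX 1.l0"],
#                 ["T0TAL 12.50", "TAX 1.10"]])
# -> ["TOTAL 12.50", "TAX 1.10"]
```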

  9. Characteristics of number transcoding errors of Chinese- versus English-speaking Alzheimer's disease patients.

    PubMed

    Ting, Simon Kang Seng; Chia, Pei Shi; Kwek, Kevin; Tan, Wilnard; Hameed, Shahul

    2016-10-01

    Number processing disorder is an acquired deficit in mathematical skills commonly observed in Alzheimer's disease (AD), usually as a consequence of neurological dysfunction. Common impairments include syntactic errors (800012 instead of 8012) and intrusion errors (8 thousand and 12 instead of eight thousand and twelve) in number transcoding tasks. This study aimed to understand the characterization of AD-related number processing disorder within an alphabetic language (English) and an ideographic language (Chinese), and to investigate the differences between alphabetic and ideographic language processing. Chinese-speaking AD patients were hypothesized to make significantly more intrusion errors than English-speaking ones, due to the ideographic nature of both Chinese characters and Arabic numerals. A simplified number transcoding test derived from the EC301 battery was administered to AD patients. Chinese-speaking AD patients made significantly more intrusion errors (p = 0.001) than English speakers. This demonstrates that number processing in an alphabetic language such as English does not function in the same manner as in Chinese. Impaired inhibition capability likely contributes to this observation, owing to the competing lexical representations in the brain for Chinese speakers.

  10. The role of visual spatial attention in adult developmental dyslexia.

    PubMed

    Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko

    2013-01-01

    The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.

  11. College students' perceptions of individuals with anorexia and bulimia nervosa.

    PubMed

    Wingfield, Natalie; Kelly, Nichole; Serdar, Kasey; Shivy, Victoria A; Mazzeo, Suzanne E

    2011-05-01

    Eating disorders (EDs) are highly stigmatized conditions. This study explored factors hypothesized to influence this stigmatization including ethnicity, gender, ED subtype, and proposed etiology. Undergraduates (N = 235) read scenarios depicting fictional characters varying on ethnicity, gender, ED subtype, and etiology. Participants reported perceptions of each character, and completed the EAT-26 and the Level-of-Contact scale. Characters with BN were viewed as more responsible for their ED and more self-destructive than those with AN, who were viewed as more self-controlled. Characters with a sociocultural etiology were rated as most likely to recover. Characters with a biological etiology were viewed as more likeable than characters with an ambiguous etiology. Characters in the ambiguous group were viewed as more self-destructive, more responsible for their ED, and less self-controlled. Differences in participants' perceptions of the characters also emerged when examining ethnicity and gender. Finally, participants' own ED symptoms and their level of contact with EDs were associated with viewing characters as more similar and self-controlled. Findings highlight the need for increased education about ED etiology and course. Copyright © 2010 Wiley Periodicals, Inc.

  12. Assessment of surface turbulent fluxes using geostationary satellite surface skin temperatures and a mixed layer planetary boundary layer scheme

    NASA Technical Reports Server (NTRS)

    Diak, George R.; Stewart, Tod R.

    1989-01-01

    A method is presented for evaluating the fluxes of sensible and latent heating at the land surface, using satellite-measured surface temperature changes in a composite surface layer-mixed layer representation of the planetary boundary layer. The basic prognostic model is tested by comparison with synoptic station information at sites where surface evaporation climatology is well known. The remote sensing version of the model, using satellite-measured surface temperature changes, is then used to quantify the sharp spatial gradient in surface heating/evaporation across the central United States. An error analysis indicates that perhaps five levels of evaporation are recognizable by these methods and that the chief cause of error is the interaction of errors in the measurement of surface temperature change with errors in the assignment of surface roughness character. Finally, two new potential methods for remote sensing of the land-surface energy balance are suggested, which will rely on space-borne instrumentation planned for the 1990s.

  13. The Effect of Realistic Appearance of Virtual Characters in Immersive Environments - Does the Character's Personality Play a Role?

    PubMed

    Zibrek, Katja; Kokkinara, Elena; Mcdonnell, Rachel

    2018-04-01

    Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character's personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character's appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.

  14. Developmental Trajectories of Youth Character: A Five-Wave Longitudinal Study of Cub Scouts and Non-Scout Boys.

    PubMed

    Wang, Jun; Ferris, Kaitlyn A; Hershberg, Rachel M; Lerner, Richard M

    2015-12-01

    Youth development programs, such as the Boy Scouts of America, aim to develop positive attributes in youth (e.g., character virtues, prosocial behaviors, and positive civic actions), which are necessary for individuals and societies to flourish. However, few developmental studies have focused on how specific positive attributes develop through participation in programs such as the Boy Scouts of America. As part of the Character and Merit Project, this article examined the developmental trajectories of character and other positive attributes, which are of focal concern of the Boy Scouts of America and the developmental literature. Data were collected from 1398 Scouts (M = 8.59 years, SD = 1.29 years, Range 6.17-11.92 years) and 325 non-Scout boys (M = 9.06 years, SD = 1.43 years, Range 6.20-11.81 years) over five waves of testing across a two-and-half-year period. Latent growth-curve analyses of self-report survey data examined the developmental trajectories of the attributes. Older youth rated themselves lower than younger participants on helpfulness, reverence, thriftiness, and school performance. However, all youth had moderately high self-ratings on all the attributes. Across waves, Scouts' self-ratings increased significantly for cheerfulness, helpfulness, kindness, obedience, trustworthiness, and hopeful future expectations. Non-Scout boys' self-ratings showed no significant change for any attributes except for a significant decrease in religious reverence among non-Scout boys from religious institutions. We discuss implications for positive youth development and for the role of the Boy Scouts of America programming in character development.

  15. Optical analysis of the star-tracker telescope for Gravity Probe

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    A ray-tracing model of the star-tracker telescope for Gravity Probe was used to predict the character of the output signal and its sensitivity to fabrication errors. In particular, the impact of the optical subsystem on the requirement of 1-milliarcsecond signal linearity over a ±50-milliarcsecond range was examined. Photomultiplier and solid-state detector options were considered. Recommendations are made.

  16. Handwritten character recognition using background analysis

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Puliti, Paolo; Zingaretti, Primo

    1993-04-01

    The paper describes a low-cost handwritten character recognizer. It consists of three modules: the `acquisition' module, the `binarization' module, and the `core' module. The core module can be logically partitioned into six steps: character dilation, character circumscription, region and `profile' analysis, `cut' analysis, decision tree descent, and result validation. Firstly, it reduces the resolution of the binarized regions and detects the minimum rectangle (MR) which encloses the character; the MR partitions the background into regions that surround the character or are enclosed by it, and allows features such as `profiles' and `cuts' to be defined: a `profile' is the set of vertical or horizontal minimum distances between a side of the MR and the character itself; a `cut' is a vertical or horizontal image segment delimited by the MR. Then, the core module classifies the character by descending along the decision tree on the basis of the analysis of regions around the character, in particular of the `profiles' and `cuts,' and without using context information. Finally, it recognizes the character or reactivates the core module by analyzing validation test results. The recognizer is largely insensitive to character discontinuity and is able to recognize Arabic numerals and English alphabet capital letters. The recognition rate of a 32 x 32 pixel character is about 97% after the first iteration, and over 98% after the second iteration.
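
    The `profile' feature is easy to sketch: for each row of the minimum rectangle, the distance from one side to the first character pixel. A minimal numpy version for the left profile (the binary-image representation is an assumption):

```python
# Left 'profile' of a character: per-row distance from the left side of the
# minimum rectangle (MR) to the first foreground pixel.
import numpy as np

def left_profile(char_img):
    """char_img: 2-D bool array cropped to the minimum rectangle."""
    h, w = char_img.shape
    prof = np.full(h, w)                 # rows with no ink keep distance w
    for r in range(h):
        cols = np.flatnonzero(char_img[r])
        if cols.size:
            prof[r] = cols[0]            # distance from the left MR side
    return prof
# Top/right/bottom profiles follow by transposing or flipping char_img.
```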

  17. Smoking in contemporary American cinema.

    PubMed

    Omidvari, Karan; Lessnau, Klaus; Kim, Jeannie; Mercante, Donald; Weinacker, Ann; Mason, Carol

    2005-08-01

    The true prevalence of smoking among characters portrayed in the movies is unknown. This study examines this prevalence objectively. The top 10 movies on the weekly box office charts were reviewed. Whether or not the top five characters in these movies smoked was documented. It was determined prior to the start of the study that 300 male characters and 300 female characters were needed to detect any significant difference. A total of 447 movies, composed of 193 movies rated R (restricted; children < 17 years of age must be accompanied by an adult), 131 movies rated PG-13 (parental guidance suggested for children < 13 years of age), and 123 movies rated PG (parental guidance suggested), were examined until the sample size was reached. Smoking prevalence is the same in contemporary American movies and in the general US population (23.3% vs 24.8%, respectively). However, there was more smoking in these movies among men than among women (25.5% vs 20.5%, respectively; p < 0.006), among antagonists than among protagonists (35.7% vs 20.6%, respectively; p < 0.001), in lower vs middle vs upper socioeconomic classes (SEC) [48.2%, 22.9%, and 10.5%, respectively; p < 0.001], in independent vs studio movies (46.2% vs 18.2%, respectively; p < 0.001), and in R-rated vs PG-13-rated vs PG-rated movies (37.3%, 16.2%, and 8.1%, respectively; p < 0.001). In R-rated movies, and in both subcategories of R-rated studio movies and R-rated independent movies, smoking prevalence is higher than in the US population (37.3%, 30.5%, and 50.6% vs 24.8%, respectively; p < 0.001 for all). Additionally, compared to the US population, men, women, and lower SEC members smoke more in R-rated movies, R-rated studio movies, and R-rated independent movies. In R-rated movies, antagonists smoke more than protagonists (43.9% vs 35.8%, respectively; p < 0.001), and whites smoke more than nonwhites (38.3% vs 26.4%, respectively; p < 0.001). In R-rated studio movies, antagonists smoke more than protagonists (42.6% vs 26.6%, respectively; p < 0.001), and men smoke more than women (32.0% vs 27.9%, respectively; p = 0.03). In R-rated independent movies, whites smoke more than nonwhites (51.8% vs 40.5%, respectively; p < 0.001). Smoking prevalence is higher in R-rated independent movies than in R-rated studio movies (50.6% vs 30.5%, respectively; p < 0.001). Smoking prevalence is also higher in R-rated independent movies than in R-rated studio movies in the subcategories of men (49.8% vs 32.0%, respectively; p < 0.001), women (51.8% vs 21.8%, respectively; p < 0.001), protagonists (51.6% vs 26.6%, respectively; p < 0.001), whites (51.8% vs 31.5%, respectively; p < 0.001), nonwhites (40.5% vs 24.7%, respectively; p < 0.001), and all three SECs. In contemporary American cinema, the smoking prevalence is higher for men, antagonistic characters, lower SEC, independent movies, and R-rated movies. Smoking prevalence is higher than in the general US population in R-rated movies, and in both its subcategories of R-rated studio movies and R-rated independent movies. There is more smoking in R-rated independent movies than in R-rated studio movies. Smoking in contemporary American cinema is associated with male sex, lower SEC, and antagonistic (i.e., bad) characters.

  18. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’-looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive-control-related activation in the prefrontal cortex of the human brain. PMID:26272730
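
    The voxel-by-voxel evidence measure can be sketched, for a normal model, as a simple likelihood ratio between an activation mean and zero; thresholding the ratio at conventional likelihood benchmarks (e.g., 8 or 32) then plays the role that fixed Type I error control plays in standard testing. The model and values below are illustrative, not the paper's simulation setup:

```python
# Likelihood ratio of "activation mean mu1" vs "no activation (mean 0)"
# for one voxel's samples, under a normal model with known sigma.
import numpy as np
from scipy.stats import norm

def voxel_likelihood_ratio(samples, mu1, sigma):
    ll1 = norm.logpdf(samples, loc=mu1, scale=sigma).sum()
    ll0 = norm.logpdf(samples, loc=0.0, scale=sigma).sum()
    return np.exp(ll1 - ll0)   # strength of evidence for activation
```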

  19. Assessment of Personality Dimensions in Children and Adolescents with Bipolar Disorder Using the Junior Temperament and Character Inventory

    PubMed Central

    Fonseca, Manoela; Caetano, Sheila C.; Hatch, John P.; Hunter, Kristina; Nicoletti, Mark; Pliszka, Steven R.; Cloninger, C. Robert; Soares, Jair C.

    2009-01-01

    Abstract Objective We compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects. Method Sixty-nine subjects (38 BP and 31 HC), 8–17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia–Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory. Results BP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety. Conclusions These findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understanding of BP children and adolescents beyond mood symptomatology. PMID:19232019

  20. Perceived bitterness character of beer in relation to hop variety and the impact of hop aroma.

    PubMed

    Oladokun, Olayide; James, Sue; Cowley, Trevor; Dehrmann, Frieda; Smart, Katherine; Hort, Joanne; Cook, David

    2017-09-01

    The impact of hop variety and hop aroma on perceived beer bitterness intensity and character was investigated using analytical and sensory methods. Beers made from malt extract were hopped with 3 distinctive hop varieties (Hersbrucker, East Kent Goldings, Zeus) to achieve equi-bitter levels. A trained sensory panel determined the bitterness character profile of each singly-hopped beer using a novel lexicon. Results showed different bitterness character profiles for each beer, with hop aroma also found to change the hop variety-derived bitterness character profiles of the beer. Rank-rating evaluations further showed the significant effect of hop aroma on selected key bitterness character attributes, by increasing perceived harsh and lingering bitterness, astringency, and bitterness intensity via cross-modal flavour interactions. This study advances understanding of the complexity of beer bitterness perception by demonstrating that hop variety selection and hop aroma both impact significantly on the perceived intensity and character of this key sensory attribute. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Convergent? Minds? Some questions about mental evolution.

    PubMed

    Cartmill, Matt

    2017-06-06

    In investigating convergent minds, we need to be sure that the things we are looking at are both minds and convergent. In determining whether a shared character state represents a convergence between two organisms, we must know the wider distribution and primitive state of that character so that we can map that character and its state transitions onto a phylogenetic tree. When we do this, some apparently primitive shared traits may prove to represent convergent losses of cognitive capacities. To avoid having to talk about the minds of plants and paramecia, we need to go beyond assessments of behaviourally defined cognition to ask questions about mind in the primary sense of the word, defined by the presence of mental events and consciousness. These phenomena depend upon the possession of brains of adequate size and centralized ontogeny and organization. They are probably limited to vertebrates. Recent discoveries suggest that consciousness is adaptively valuable as a late error-detection mechanism in the initiation of action, and point to experimental techniques for assessing its presence or absence in non-human mammals.

  2. Incident reporting: Its role in aviation safety and the acquisition of human error data

    NASA Technical Reports Server (NTRS)

    Reynard, W. D.

    1983-01-01

    The rationale for aviation incident reporting systems is presented and contrasted with some of the shortcomings of accident investigation procedures. The history of the United States' Aviation Safety Reporting System (ASRS) is outlined and the program's character explained. The planning elements that resulted in the ASRS program's voluntary, confidential, and non-punitive design are discussed. Immunity, from enforcement action and misuse of the volunteered data, is explained and evaluated. Report generation techniques and the ASRS data analysis process are described; in addition, examples of the ASRS program's output and accomplishments are detailed. Finally, the value of incident reporting for the acquisition of safety information, particularly human error data, is explored.

  3. Quantitative traits and diversification.

    PubMed

    FitzJohn, Richard G

    2010-12-01

    Quantitative traits have long been hypothesized to affect speciation and extinction rates. For example, smaller body size or increased specialization may be associated with increased rates of diversification. Here, I present a phylogenetic likelihood-based method (quantitative state speciation and extinction [QuaSSE]) that can be used to test such hypotheses using extant character distributions. This approach assumes that diversification follows a birth-death process where speciation and extinction rates may vary with one or more traits that evolve under a diffusion model. Speciation and extinction rates may be arbitrary functions of the character state, allowing much flexibility in testing models of trait-dependent diversification. I test the approach using simulated phylogenies and show that a known relationship between speciation and a quantitative character could be recovered in up to 80% of cases on large trees (500 species). Consistent with other approaches, detecting shifts in diversification due to differences in extinction rates was harder than when they were due to differences in speciation rates. Finally, I demonstrate the application of QuaSSE to investigate the correlation between body size and diversification in primates, concluding that clade-specific differences in diversification may be more important than size-dependent diversification in shaping the patterns of diversity within this group.
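
    As a rough illustration of the birth-death-diffusion model that QuaSSE assumes, the following Python sketch simulates lineages whose quantitative trait evolves by Brownian motion while per-lineage speciation and extinction rates are arbitrary functions of the trait. The rate functions, parameter values, and function names are invented for illustration; this is not the likelihood machinery of the paper.

        import math
        import random

        def simulate_lineages(t_max, x0=0.0, sigma=0.1, dt=0.01,
                              lam=lambda x: 0.10 + 0.05 / (1.0 + math.exp(-x)),
                              mu=lambda x: 0.03):
            """Trait-dependent birth-death process: each lineage carries a trait x
            that diffuses, while lam(x) and mu(x) set its speciation and
            extinction rates, in the spirit of the QuaSSE model."""
            lineages, t = [x0], 0.0
            while t < t_max and lineages:
                survivors = []
                for x in lineages:
                    u = random.random()
                    if u < lam(x) * dt:                    # speciation: two daughters
                        survivors += [x, x]
                    elif u < (lam(x) + mu(x)) * dt:        # extinction
                        continue
                    else:                                  # Brownian trait change
                        survivors.append(x + random.gauss(0.0, sigma * math.sqrt(dt)))
                lineages = survivors
                t += dt
            return lineages

        tips = simulate_lineages(t_max=20.0)
        print(len(tips), "extant lineages; mean trait =",
              sum(tips) / max(len(tips), 1))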

  4. Finite-time control for nonlinear spacecraft attitude based on terminal sliding mode technique.

    PubMed

    Song, Zhankui; Li, Hongxing; Sun, Kaibiao

    2014-01-01

    In this paper, a fast terminal sliding mode control (FTSMC) scheme with double closed loops is proposed for spacecraft attitude control. The FTSMC laws are included in both an inner control loop and an outer control loop. Firstly, a fast terminal sliding surface (FTSS) is constructed, which can drive the inner loop tracking-error and the outer loop tracking-error on the FTSS to converge to zero in finite time. Secondly, the FTSMC strategy is designed using Lyapunov's method to ensure the occurrence of the sliding motion in finite time, which preserves a fast transient response and improves tracking accuracy. It is proved that FTSMC can guarantee the convergence of the tracking-error in both the approaching and sliding phases. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Performance and Evaluation of the Global Modeling and Assimilation Office Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki; Errico, R. M.; Carvalho, D.

    2018-01-01

    The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) has spent more than a decade developing and implementing a global Observing System Simulation Experiment (OSSE) framework for use in evaluating both new observation types and the behavior of data assimilation systems. The NASA/GMAO OSSE has constantly evolved to reflect changes in the Gridpoint Statistical Interpolation data assimilation system, the Goddard Earth Observing System model, version 5 (GEOS-5), and the real-world observational network. Software and observational datasets for the GMAO OSSE are publicly available, along with a technical report. Substantial modifications have recently been made to the NASA/GMAO OSSE framework, including the character of synthetic observation errors, new instrument types, and more sophisticated atmospheric wind vectors. These improvements will be described, along with the overall performance of the current OSSE. Lessons learned from investigations into correlated errors and model error will be discussed.

  6. Validity of the growth model of the 'computerized visual perception assessment tool for Chinese characters structures'.

    PubMed

    Wu, Huey-Min; Li, Cheng-Hsaun; Kuo, Bor-Chen; Yang, Yu-Mao; Lin, Chin-Kai; Wan, Wei-Hsiang

    2017-08-01

    Morphological awareness is the foundation for the important developmental skills involved with vocabulary, as well as understanding the meaning of words, orthographic knowledge, reading, and writing. Visual perception of space and radicals in the two-dimensional positions of Chinese characters' morphology is very important in identifying Chinese characters. The important predictive variables of spatial and visual perception in Chinese character identification were investigated with a growth model in this research. The assessment tool is the "Computerized Visual Perception Assessment Tool for Chinese Characters Structures" developed by this study. There are two constructs, basic stroke and character structure. In the basic stroke, there are three subtests of one, two, and more than three strokes. In the character structure, there are three subtests of single-component character, horizontal-compound character, and vertical-compound character. This study used purposive sampling. In the first year, 551 children 4-6 years old participated in the study and were monitored for one year. In the second year, 388 children remained in the study, a successful follow-up rate of 70.4%. This study used a two-wave cross-lagged panel design to validate the growth model of the basic stroke and the character structure. There was significant correlation between the basic stroke and the character structure at different time points. The abilities in the basic stroke and in the character structure steadily developed over time for preschool children. Children's knowledge of the basic stroke effectively predicted their later knowledge of the basic stroke and the character structure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.
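
    The reliability gain from cascading can be illustrated with a back-of-the-envelope calculation. The sketch below assumes idealized bounded-distance decoding: an inner block decodes correctly whenever it contains at most t_inner bit errors, and each failed inner block becomes one symbol error for the outer code. The code parameters are hypothetical stand-ins, not the specific schemes evaluated in the report.

        from math import comb

        def block_error(n, t, p):
            """P(more than t of n symbols are in error), each wrong independently with prob p."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

        def cascaded_error(n_inner, t_inner, n_outer, t_outer, eps):
            p_symbol = block_error(n_inner, t_inner, eps)   # inner decoder failure rate
            return block_error(n_outer, t_outer, p_symbol)  # outer decoder failure rate

        # Even at a raw channel bit error rate of 0.01, the cascaded block
        # failure probability is driven down to a negligible level.
        print(cascaded_error(n_inner=15, t_inner=1, n_outer=255, t_outer=16, eps=0.01))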

  8. Mathematical morphology-based shape feature analysis for Chinese character recognition systems

    NASA Astrophysics Data System (ADS)

    Pai, Tun-Wen; Shyu, Keh-Hwa; Chen, Ling-Fan; Tai, Gwo-Chin

    1995-04-01

    This paper proposes an efficient technique of shape feature extraction based on mathematical morphology theory. A new shape complexity index for the preclassification of machine-printed Chinese Character Recognition (CCR) is also proposed. For characters represented in different fonts/sizes or in a low-resolution environment, a stable local feature such as shape structure is preferred for character recognition. Morphological valley extraction filters are applied to extract the protrusive strokes from the four sides of an input Chinese character. The number of extracted local strokes reflects the shape complexity of each side. These shape features are encoded as corresponding shape complexity indices. Based on the shape complexity index, the database can be classified into 16 groups prior to recognition. Associating shape feature analysis with recognition reclaims several characters from misrecognized character sets and yields an average 3.3% improvement in recognition rate over an existing recognition system. Beyond enhancing recognition performance, the extracted stroke information can be further analyzed to classify each stroke type. The combination of extracted strokes from each side therefore provides a means for clustering the database by radical or subword components, making it one of the best solutions for recognizing high-complexity writing systems such as Chinese, which comprises more than 13,000 characters divided into more than 200 different categories.

  9. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  10. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  11. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  13. Being a Victim of Medical Error in Brazil: An (Un)Real Dilemma

    PubMed Central

    Mendonça, Vitor Silva; Custódio, Eda Marconi

    2016-01-01

    Medical error stems from inadequate professional conduct that is capable of producing harm to life or exacerbating the health of another, whether through act or omission. This situation has become increasingly common in Brazil and worldwide. In this study, the aim was to understand what being the victim of medical error is like and to investigate the circumstances imposed on this condition of victims in Brazil. A semi-structured interview was conducted with twelve people who had gone through situations of medical error in their lives, creating a space for narratives of their experiences and deep reflection on the phenomenon. The concept of medical error has a negative connotation, often being associated with the incompetence of a medical professional. Medical error in Brazil is demonstrated by low-quality professional performance and represents the current reality of the country because of the common lack of respect and consideration for patients. Victims often remark on their loss of identity, as their social functions have been interrupted and they do not expect to regain them. Little acknowledgment of error was found, however, in the discourses and attitudes of the doctors involved; victims thus felt a need to have the medical conduct judged in an attempt to assert their rights. Medical error in Brazil presents a punitive character and is little discussed in medical and scientific circles. The stigma of medical error is closely connected to the value and cultural judgments of the country, making it difficult to accept, both for victims and professionals. PMID:27403461

  14. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered using the properties of the fitted ellipse and the distribution properties of characters, excluding most non-characters. Finally, a new extremal-region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
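
    A minimal sketch of the character-candidate stage using OpenCV's MSER implementation follows; the input file name and the aspect-ratio bound are assumptions for illustration, and the paper's pruning and projection-merging steps are not reproduced.

        import cv2

        img = cv2.imread("scene.jpg")                       # hypothetical input image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        mser = cv2.MSER_create()
        regions, bboxes = mser.detectRegions(gray)          # extremal regions as point sets

        candidates = []
        for pts in regions:
            if len(pts) < 5:                                # fitEllipse needs >= 5 points
                continue
            (cx, cy), (w, h), angle = cv2.fitEllipse(pts.reshape(-1, 1, 2))
            if min(w, h) < 1:
                continue
            if max(w, h) / min(w, h) < 8:                   # assumed character-like bound
                candidates.append((cx, cy, w, h))
        print(len(candidates), "character candidates")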

  15. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining: 25%, 19% (one missing data point); post-training: 23%, 6%, 11%; final audit: 7%, 3%, 5%; p<0.0005). The total number of prescriptions and error rates varied widely between trainees (data collection one, cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%, median 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  16. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytical, analytical and postanalytical phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and units with high error rates were monitored. Fifty-six (52.4%) of the 107 recorded errors were at the preanalytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as postanalytical. Two of the 45 analytical errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory reduced error rates, mainly in the preanalytical and analytical phases.
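
    The arithmetic behind figures such as "3.4 errors per million" can be sketched as follows: convert an error count into defects per million opportunities (DPMO) and then into a sigma level under the conventional 1.5-sigma shift. The opportunity volume below is hypothetical.

        from scipy.stats import norm

        def sigma_level(errors, opportunities, shift=1.5):
            """DPMO and the (1.5-sigma-shifted) sigma level for an error count."""
            dpmo = errors / opportunities * 1_000_000
            return dpmo, norm.ppf(1 - dpmo / 1_000_000) + shift

        # The abstract reports 6.8 and 1.3 errors per million in the two half-years.
        for per_million in (6.8, 1.3):
            dpmo, sigma = sigma_level(per_million, 1_000_000)
            print(f"{dpmo:.1f} DPMO -> {sigma:.2f} sigma")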

  17. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
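
    As a minimal stand-in for the regression step, the sketch below fits ordinary least squares relating error counts to candidate drivers. The column meanings and all numbers are invented; the maximum likelihood, goodness-of-fit, and principal component steps the paper describes are omitted.

        import numpy as np

        # Hypothetical columns: files radiated, commands per file, workload, novelty.
        X = np.array([[12, 340, 0.6, 0.2],
                      [ 7, 150, 0.3, 0.1],
                      [20, 610, 0.9, 0.7],
                      [15, 420, 0.5, 0.4],
                      [11, 260, 0.7, 0.5],
                      [ 9, 200, 0.4, 0.3]], dtype=float)
        errors = np.array([3.0, 1.0, 9.0, 4.0, 5.0, 2.0])   # observed error counts

        A = np.column_stack([np.ones(len(X)), X])           # add an intercept column
        coef, *_ = np.linalg.lstsq(A, errors, rcond=None)
        print("coefficients:", np.round(coef, 3))
        print("residuals:", np.round(errors - A @ coef, 3))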

  18. Selling Gender: Associations of Box Art Representation of Female Characters With Sales for Teen- and Mature-rated Video Games.

    PubMed

    Near, Christopher E

    2013-02-01

    Content analysis of video games has consistently shown that women are portrayed much less frequently than men and in subordinate roles, often in "hypersexualized" ways. However, the relationship between portrayal of female characters and videogame sales has not previously been studied. In order to assess the cultural influence of video games on players, it is important to weight differently those games seen by the majority of players (in the millions), rather than a random sample of all games, many of which are seen by only a few thousand people. Box art adorning the front of video game boxes is a form of advertising seen by most game customers prior to purchase and should therefore predict sales if indeed particular depictions of female and male characters influence sales. Using a sample of 399 box art cases from games with ESRB ratings of Teen or Mature released in the US during the period of 2005 through 2010, this study shows that sales were positively related to sexualization of non-central female characters among cases with women present. In contrast, sales were negatively related to the presence of any central female characters (sexualized or non-sexualized) or the presence of female characters without male characters present. These findings suggest there is an economic motive for the marginalization and sexualization of women in video game box art, and that there is greater audience exposure to these stereotypical depictions than to alternative depictions because of their positive relationship to sales.

  19. Selling Gender: Associations of Box Art Representation of Female Characters With Sales for Teen- and Mature-rated Video Games

    PubMed Central

    Near, Christopher E.

    2012-01-01

    Content analysis of video games has consistently shown that women are portrayed much less frequently than men and in subordinate roles, often in “hypersexualized” ways. However, the relationship between portrayal of female characters and videogame sales has not previously been studied. In order to assess the cultural influence of video games on players, it is important to weight differently those games seen by the majority of players (in the millions), rather than a random sample of all games, many of which are seen by only a few thousand people. Box art adorning the front of video game boxes is a form of advertising seen by most game customers prior to purchase and should therefore predict sales if indeed particular depictions of female and male characters influence sales. Using a sample of 399 box art cases from games with ESRB ratings of Teen or Mature released in the US during the period of 2005 through 2010, this study shows that sales were positively related to sexualization of non-central female characters among cases with women present. In contrast, sales were negatively related to the presence of any central female characters (sexualized or non-sexualized) or the presence of female characters without male characters present. These findings suggest there is an economic motive for the marginalization and sexualization of women in video game box art, and that there is greater audience exposure to these stereotypical depictions than to alternative depictions because of their positive relationship to sales. PMID:23467816

  20. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline predicted earlier from the design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two reprocessed attitude datasets that differ in pointing performance. Range-rate residuals are then computed from each of the two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.

  1. Interaction of attentional and motor control processes in handwriting.

    PubMed

    Brown, T L; Donnenwirth, E E

    1990-01-01

    The interaction between attentional capacity, motor control processes, and strategic adaptations to changing task demands was investigated in handwriting, a continuous (rather than discrete) skilled performance. Twenty-four subjects completed 12 two-minute handwriting samples under instructions stressing speeded handwriting, normal handwriting, or highly legible handwriting. For half of the writing samples, a concurrent auditory monitoring task was imposed. Subjects copied either familiar (English) or unfamiliar (Latin) passages. Writing speed, legibility ratings, errors in writing and in the secondary auditory task, and a derived measure of the average number of characters held in short-term memory during each sample ("planning unit size") were the dependent variables. The results indicated that the ability to adapt to instructions stressing speed or legibility was substantially constrained by the concurrent listening task and by text familiarity. Interactions between instructions, task concurrence, and text familiarity in the legibility ratings, combined with further analyses of planning unit size, indicated that information throughput from temporary storage mechanisms to motor processes mediated the loss of flexibility effect. Overall, the results suggest that strategic adaptations of a skilled performance to changing task circumstances are sensitive to concurrent attentional demands and that departures from "normal" or "modal" performance require attention.

  2. Morphology informed by phylogeny reveals unexpected patterns of species differentiation in the aquatic moss Rhynchostegium riparioides s.l.

    PubMed

    Hutsemékers, Virginie; Vieira, Cristiana C; Ros, Rosa María; Huttunen, Sanna; Vanderpoorten, Alain

    2012-02-01

    Bryophyte floras typically exhibit extremely low levels of endemism. The interpretation that this might reflect taxonomic shortcomings is tested here for the Macaronesian flora, using the moss species complex of Rhynchostegium riparioides as a model. The deep polyphyly of R. riparioides across its distribution range reveals active differentiation that corresponds better to geographic than to morphological differences. Morphometric analyses are, in fact, blurred by a size gradient that accounts for 80% of the variation observed among gametophytic traits. The lack of endemic diversification observed in R. riparioides in Macaronesia weakens the idea that the low rates of endemism observed in the Macaronesian bryophyte flora might be explained solely by taxonomic shortcomings. Conversely, the striking polyphyly of North American and European lineages of R. riparioides suggests that the similarity between the floras of these continents has been over-emphasized. Discriminant analyses point to the existence of morphological discontinuities among the lineages resolved by the molecular phylogeny. The global rate of error associated with species identification based on morphology (0.23) indicates, however, that intergradation of shape and size characters among species in the group challenges their identification. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    In classical design methods, the loading factor at the design point is calculated by one or another empirical formula, and performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies within ±1.5%. The alternative model of loading factor performance is included in new versions of the Universal Modelling Method.
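
    A linear loading-factor model of the kind described, with the straight line fixed by the zero-flow value and the design point, takes only a few lines; the symbols and numbers below are hypothetical, not data from the thirteen tested stages.

        def loading_factor(phi, psi_zero, phi_des, psi_des):
            """Straight line through the zero-flow loading factor psi_zero
            and the design point (phi_des, psi_des)."""
            return psi_zero + (psi_des - psi_zero) * phi / phi_des

        # Hypothetical stage: psi = 0.62 at design flow coefficient 0.08, 0.90 at zero flow.
        for phi in (0.0, 0.04, 0.08, 0.10):
            print(phi, round(loading_factor(phi, psi_zero=0.90, phi_des=0.08, psi_des=0.62), 3))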

  4. Depiction of alcohol, tobacco, and other substances in G-rated animated feature films.

    PubMed

    Thompson, K M; Yokota, F

    2001-06-01

    To quantify and characterize the depiction of alcohol, tobacco, and other substances in G-rated animated feature films. The content of all G-rated animated feature films released in theaters between 1937 and 2000, recorded in English, and available on videocassette in the United States by October 31, 2000, was reviewed for portrayals of alcohol, tobacco, and other substances and their use. Duration of scenes depicting alcohol, tobacco, or other substances; type of characters using them (good, neutral, or bad); and correlation of amount and type used with character type and movie type were evaluated. Of the 81 films reviewed, 38 films (47%) showed alcohol use (mean exposure: 42 seconds per film; range: 2 seconds to 2.9 minutes) and 35 films (43%) showed tobacco use (mean exposure: 2.1 minutes per film; range: 2 seconds to 10.5 minutes). Analysis of time trends showed a significant decrease in both tobacco and alcohol use over time (both corrected for total screen duration and uncorrected). No films showed the use of illicit drugs, although 3 films showed characters consuming a substance that transfigured them and 2 films showed characters injected with a drug. Analysis of the correlation of alcohol and tobacco depiction revealed several scenes in which alcohol and tobacco were shown in use in the same scene and that bar scenes in these movies depict a significant amount of drinking, smoking, and violence. Three films contained a message that a character should stop smoking but none contained messages about restricting consumption of alcohol. The depiction of alcohol and tobacco use in G-rated animated films seems to be decreasing over time. Nonetheless, parents should be aware that nearly half of the G-rated animated feature films available on videocassette show alcohol and tobacco use as normative behavior and do not convey the long-term consequences of this use.

  5. Cursor Control Device Test Battery

    NASA Technical Reports Server (NTRS)

    Holden, Kritina; Sandor, Aniko; Pace, John; Thompson, Shelby

    2013-01-01

    The test battery was developed to provide a standard procedure for cursor control device evaluation. The software was built in Visual Basic and consists of nine tasks and a main menu that integrates the set-up of the tasks. The tasks can be used individually, or in a series defined in the main menu. Task 1, the Unidirectional Pointing Task, tests the speed and accuracy of clicking on targets. Two rectangles with an adjustable width and adjustable center-to-center distance are presented. The task is to click back and forth between the two rectangles. Clicks outside of the rectangles are recorded as errors. Task 2, Multidirectional Pointing Task, measures speed and accuracy of clicking on targets approached from different angles. Twenty-five numbered squares of adjustable width are arranged around an adjustable diameter circle. The task is to point and click on the numbered squares (placed on opposite sides of the circle) in consecutive order. Clicks outside of the squares are recorded as errors. Task 3, Unidirectional (horizontal) Dragging Task, is similar to dragging a file into a folder on a computer desktop. Task 3 requires dragging a square of adjustable width from one rectangle and dropping it into another. The width of each rectangle is adjustable, as well as the distance between the two rectangles. Dropping the square outside of the rectangles is recorded as an error. Task 4, Unidirectional Path Following, is similar to Task 3. The task is to drag a square through a tunnel consisting of two lines. The size of the square and the width of the tunnel are adjustable. If the square touches any of the lines, it is counted as an error and the task is restarted. Task 5, Text Selection, involves clicking on a Start button, and then moving directly to the underlined portion of the displayed text and highlighting it. The pointing distance to the text is adjustable, as well as the to-be-selected font size and the underlined character length. If the selection does not include all of the underlined characters, or includes non-underlined characters, it is recorded as an error. Task 6, Multi-size and Multi-distance Pointing, presents the participant with 24 consecutively numbered buttons of different sizes (63 to 163 pixels), and at different distances (60 to 80 pixels) from the Start button. The task is to click on the Start button, and then move directly to, and click on, each numbered target button in consecutive order. Clicks outside of the target area are errors. Task 7, Standard Interface Elements Task, involves interacting with standard interface elements as instructed in written procedures, including: drop-down menus, sliders, text boxes, radio buttons, and check boxes. Task completion time is recorded. In Task 8, a circular track is presented with a disc in it at the top. Track width and disc size are adjustable. The task is to move the disc with circular motion within the path without touching the boundaries of the track. Time and errors are recorded. Task 9 is a discrete task that allows evaluation of discrete cursor control devices that tab from target to target, such as a castle switch. The task is to follow a predefined path and to click on the yellow targets along the path.

  6. Expert system for automatically correcting OCR output

    NASA Astrophysics Data System (ADS)

    Taghva, Kazem; Borsack, Julie; Condit, Allen

    1994-03-01

    This paper describes a new expert system for automatically correcting errors made by optical character recognition (OCR) devices. The system, which we call the post-processing system, is designed to improve the quality of text produced by an OCR device in preparation for subsequent retrieval from an information system. The system is composed of numerous parts: an information retrieval system, an English dictionary, a domain-specific dictionary, and a collection of algorithms and heuristics designed to correct as many OCR errors as possible. For the remaining errors that cannot be corrected, the system passes them on to a user-level editing program. This post-processing system can be viewed as part of a larger system that would streamline the steps of taking a document from its hard copy form to its usable electronic form, or it can be considered a stand-alone system for OCR error correction. An earlier version of this system has been used to process approximately 10,000 pages of OCR generated text. Among the OCR errors discovered by this version, about 87% were corrected. We implement numerous new parts of the system, test this new version, and present the results.
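
    One of the simplest heuristics such a post-processor can apply is dictionary lookup by edit distance. The sketch below is a generic illustration of that idea under an assumed toy lexicon, not the actual algorithm collection of the described system.

        def edit_distance(a, b):
            """Levenshtein distance by dynamic programming."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        def correct(token, lexicon, max_dist=2):
            """Replace an OCR token with its nearest lexicon entry, if close enough."""
            best = min(lexicon, key=lambda w: edit_distance(token, w))
            return best if edit_distance(token, best) <= max_dist else token

        lexicon = {"information", "retrieval", "system"}         # toy domain dictionary
        print([correct(w, lexicon) for w in ("inforrnation", "retr1eval", "system")])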

  7. Diversification of Angraecum (Orchidaceae, Vandeae) in Madagascar: Revised Phylogeny Reveals Species Accumulation through Time Rather than Rapid Radiation

    PubMed Central

    2016-01-01

    Angraecum is the largest genus of subtribe Angraecinae (Orchidaceae), with about 221 species. Madagascar is the center of diversity for the genus, with ca. 142 species, of which 90% are endemic. The great morphological diversity associated with species diversification in the genus on the island of Madagascar offers valuable insights for macroevolutionary studies. Phylogenies of the Angraecinae have been published, but a lack of taxon and character sampling and their limited taxonomic resolution limit their use for macroevolutionary studies. We present a new phylogeny of Angraecum based on chloroplast sequence data (matK, rps16, trnL), nuclear ribosomal data (ITS2) and 39 morphological characters from 194 Angraecinae species, of which 69 were newly sampled. Using this phylogeny, we evaluated the monophyly of the sections of Angraecum as defined by Garay and investigated the patterns of species diversification within the genus. We used maximum parsimony and Bayesian analyses to generate phylogenetic trees and dated divergence times of the phylogeny. We analyzed diversification patterns within Angraecinae and Angraecum with an emphasis on four floral characters (flower color, flower size, labellum position, spur length), using macroevolutionary models to evaluate which characters or character states are associated with speciation rates, and inferred ancestral states of these characters. The phylogenetic analysis showed the polyphyly of Angraecum sensu lato and of all Angraecum sections except sect. Hadrangis, and that morphology can be consistent with the phylogeny. It appeared that the characters (flower color, flower size, spur length) formerly used by many authors to delineate Angraecum groups were insufficient to do so. However, the newly described character, the position of the labellum (uppermost or lowermost), was the main character delimiting clades within a monophyletic Angraecum sensu stricto. This character also appeared to be associated with speciation rates in Angraecum. The macroevolutionary model-based analysis failed to detect shifts in diversification that could be associated directly with morphological diversification. Diversification in Angraecum resulted from gradual species accumulation through time rather than from rapid radiation, a diversification pattern often encountered in tropical rain forests. PMID:27669569

  8. Diversification of Angraecum (Orchidaceae, Vandeae) in Madagascar: Revised Phylogeny Reveals Species Accumulation through Time Rather than Rapid Radiation.

    PubMed

    Andriananjamanantsoa, Herinandrianina N; Engberg, Shannon; Louis, Edward E; Brouillet, Luc

    Angraecum is the largest genus of subtribe Angraecinae (Orchidaceae), with about 221 species. Madagascar is the center of diversity for the genus, with ca. 142 species, of which 90% are endemic. The great morphological diversity associated with species diversification in the genus on the island of Madagascar offers valuable insights for macroevolutionary studies. Phylogenies of the Angraecinae have been published, but a lack of taxon and character sampling and their limited taxonomic resolution limit their use for macroevolutionary studies. We present a new phylogeny of Angraecum based on chloroplast sequence data (matK, rps16, trnL), nuclear ribosomal data (ITS2) and 39 morphological characters from 194 Angraecinae species, of which 69 were newly sampled. Using this phylogeny, we evaluated the monophyly of the sections of Angraecum as defined by Garay and investigated the patterns of species diversification within the genus. We used maximum parsimony and Bayesian analyses to generate phylogenetic trees and dated divergence times of the phylogeny. We analyzed diversification patterns within Angraecinae and Angraecum with an emphasis on four floral characters (flower color, flower size, labellum position, spur length), using macroevolutionary models to evaluate which characters or character states are associated with speciation rates, and inferred ancestral states of these characters. The phylogenetic analysis showed the polyphyly of Angraecum sensu lato and of all Angraecum sections except sect. Hadrangis, and that morphology can be consistent with the phylogeny. It appeared that the characters (flower color, flower size, spur length) formerly used by many authors to delineate Angraecum groups were insufficient to do so. However, the newly described character, the position of the labellum (uppermost or lowermost), was the main character delimiting clades within a monophyletic Angraecum sensu stricto. This character also appeared to be associated with speciation rates in Angraecum. The macroevolutionary model-based analysis failed to detect shifts in diversification that could be associated directly with morphological diversification. Diversification in Angraecum resulted from gradual species accumulation through time rather than from rapid radiation, a diversification pattern often encountered in tropical rain forests.

  9. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.

  10. Phenotypic polymorphism of Chrysomya albiceps (Wiedemann) (Diptera: Calliphoridae) may lead to species misidentification.

    PubMed

    Grella, Maicon D; Savino, André G; Paulo, Daniel F; Mendes, Felipe M; Azeredo-Espin, Ana M L; Queiroz, Margareth M C; Thyssen, Patricia J; Linhares, Arício X

    2015-01-01

    Species identification is an essential step in the progress and completion of work in several areas of biological knowledge, but it is not a simple process. Due to the close phylogenetic relationship of certain species, morphological characters are not always sufficiently distinguishable. As a result, it is necessary to combine several methods of analysis that contribute to a distinct categorization of taxa. This study aimed to identify diagnostic characters, both morphological and molecular, for the correct identification of species of the genus Chrysomya (Diptera: Calliphoridae) recorded in the New World, a genus that has continuously generated discussion about its taxonomic position over the last century. A clear example of this situation was the first record of Chrysomya rufifacies in Brazilian territory in 2012. However, the morphological polymorphism and genetic variability of Chrysomya albiceps studied here show that both species (C. rufifacies and C. albiceps) share very similar character states, leading to misidentification and subsequent registration error of the species present in our territory. This conclusion is demonstrated by the authors based on a review of the material deposited in major scientific collections in Brazil and subsequent molecular and phylogenetic analysis of these samples. Additionally, we propose a new taxonomic key to separate the species of Chrysomya found on the American continent, taking into account a larger number of characters beyond those available in the current literature. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Hearing Story Characters' Voices: Auditory Imagery during Reading

    ERIC Educational Resources Information Center

    Gunraj, Danielle N.; Klin, Celia M.

    2012-01-01

    Despite the longstanding belief in an inner voice, there is surprisingly little known about the perceptual features of that voice during text processing. This article asked whether readers infer nonlinguistic phonological features, such as speech rate, associated with a character's speech. Previous evidence for this type of auditory imagery has…

  12. Dissociable neural systems supporting knowledge about human character and appearance in ourselves and others.

    PubMed

    Moran, Joseph M; Lee, Su Mei; Gabrieli, John D E

    2011-09-01

    Functional neuroimaging has identified a neural system comprising posterior cingulate (pCC) and medial prefrontal (mPFC) cortices that appears to mediate self-referential thought. It is unclear whether the two components of this system mediate similar or different psychological processes, and how specific this system is for self relative to others. In an fMRI study, we compared brain responses for evaluation of character (e.g., honest) versus appearance (e.g., svelte) for oneself, one's mother (a close other), and President Bush (a distant other). There was a double dissociation between dorsal mPFC, which was more engaged for character than appearance judgments, and pCC, which was more engaged for appearance than character judgments. A ventral region of mPFC was engaged for judgments involving one's own character and appearance, and one's mother's character, but not her appearance. A follow-up behavioral study indicated that participants rate their own character and appearance, and their mother's character, but not her appearance, as important in their self-concept. This suggests that ventral mPFC activation reflects its role in processing information relevant to the self, but not limited to the self. Thus, specific neural systems mediate specific aspects of thinking about character and appearance in oneself and in others.

  13. Methods for Presenting Braille Characters on a Mobile Device with a Touchscreen and Tactile Feedback.

    PubMed

    Rantala, J; Raisamo, R; Lylykangas, J; Surakka, V; Raisamo, J; Salminen, K; Pakkanen, T; Hippula, A

    2009-01-01

    Three novel interaction methods were designed for reading six-dot Braille characters from the touchscreen of a mobile device. A prototype device with a piezoelectric actuator embedded under the touchscreen was used to create tactile feedback. The three interaction methods, scan, sweep, and rhythm, enabled users to read Braille characters one at a time either by exploring the characters dot by dot or by sensing a rhythmic pattern presented on the screen. The methods were tested with five blind Braille readers as a proof of concept. The results of the first experiment showed that all three methods can be used to convey information as the participants could accurately (91-97 percent) recognize individual characters. In the second experiment the presentation rate of the most efficient and preferred method, the rhythm, was varied. A mean recognition accuracy of 70 percent was found when the speed of presenting a single character was nearly doubled from the first experiment. The results showed that temporal tactile feedback and Braille coding can be used to transmit single-character information while further studies are still needed to evaluate the presentation of serial information, i.e., multiple Braille characters.

  14. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
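
    For orientation, the single-user case is easy to check by Monte Carlo: the squared Nakagami-m envelope is gamma distributed, so fading powers can be drawn directly. This sketch deliberately omits the MAI term, whose characteristic function is the paper's actual contribution.

        import numpy as np

        rng = np.random.default_rng(0)

        def ber_nakagami_bpsk(m, snr_db, n_trials=1_000_000):
            """Monte Carlo bit error rate for BPSK over flat Nakagami-m fading (omega = 1)."""
            snr = 10 ** (snr_db / 10)
            power = rng.gamma(shape=m, scale=1.0 / m, size=n_trials)   # |h|^2 ~ Gamma(m, 1/m)
            h = np.sqrt(power)
            bits = rng.choice([-1.0, 1.0], size=n_trials)
            noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n_trials)
            return np.mean(np.sign(h * bits + noise) != bits)

        for m in (1, 2, 4):    # m = 1 reduces to Rayleigh fading
            print(f"m={m}: BER ~ {ber_nakagami_bpsk(m, snr_db=10):.4e}")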

  15. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noise and image resolution; thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combining diverse recognition systems using iVector-based features, a method newly developed in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system; an iVector is derived from a high-dimensional supervector for each text line and used to predict OCR accuracy. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of the text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using the word error rate (WER) metric.
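
    The merging step can be sketched as a greedy selection over overlapping text-line hypotheses. Here a precomputed predicted_score stands in for the iVector-based accuracy prediction, and line geometry is simplified to one vertical interval per hypothesis; all names and numbers are illustrative.

        def overlap_ratio(a, b):
            """Overlap of two intervals (y0, y1) as a fraction of their union."""
            inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
            union = max(a[1], b[1]) - min(a[0], b[0])
            return inter / union if union else 0.0

        def merge(hyps, min_overlap=0.5):
            """hyps: (interval, text, predicted_score) triples from several OCR systems.
            Among overlapping hypotheses, keep the one with the best predicted score."""
            kept = []
            for box, text, score in sorted(hyps, key=lambda h: -h[2]):
                if all(overlap_ratio(box, k[0]) < min_overlap for k in kept):
                    kept.append((box, text, score))
            return [k[1] for k in sorted(kept, key=lambda k: k[0][0])]

        print(merge([((0, 10), "error rate", 0.91),
                     ((1, 11), "err0r rate", 0.62),
                     ((12, 22), "analysis", 0.88)]))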

  16. New subspace methods for ATR

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Peng, Jing; Sims, S. Richard F.

    2005-05-01

    In ATR applications, each feature is a convolution of an image with a filter. It is important to use the most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction that address limitations of the Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that the target class is more homogeneous, while clutter can be anything other than target and anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace (OBS), is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character images as well as IR data.
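
    For reference, the baseline FKT that both proposed methods modify can be written in a few lines of numpy: whiten the summed class covariances, then eigendecompose one class in the whitened space, where the two classes share eigenvectors with eigenvalues summing to one. The toy data are invented.

        import numpy as np

        def fukunaga_koontz(target, clutter):
            """Return a basis and eigenvalues; values near 1 capture target
            variance, values near 0 capture clutter variance."""
            c1 = np.cov(target, rowvar=False)
            c2 = np.cov(clutter, rowvar=False)
            vals, vecs = np.linalg.eigh(c1 + c2)
            w = vecs @ np.diag(vals ** -0.5)           # whitens c1 + c2
            lam, phi = np.linalg.eigh(w.T @ c1 @ w)    # eigenvalues lie in [0, 1]
            return w @ phi, lam

        rng = np.random.default_rng(1)
        target = rng.normal(size=(200, 5)) * [3.0, 1.0, 1.0, 1.0, 0.2]
        clutter = rng.normal(size=(200, 5)) * [0.2, 1.0, 1.0, 1.0, 3.0]
        basis, lam = fukunaga_koontz(target, clutter)
        print(np.round(lam, 2))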

  17. Comprehension by learning-disabled and nondisabled adolescents of personal/social problems presented in text.

    PubMed

    Williams, J P

    1991-01-01

    Four groups of 14-year-olds, differing in reading level, learning disability status, and socioeconomic status, read and retold short problem narratives and answered questions. The pattern of reporting components of the problem schema (goal/obstacles/choices) differed for problems presented with or without a statement of the character's priority for action, suggesting that including priorities adds another level of information to the problem text and changes its macrostructure. Even the poorest readers showed this sensitivity to text structure. Three of the four measures of problem representation (idea units recalled, problem-schema components reported, and error rate) reflected overall reading ability. However, the degree to which extraneous information was incorporated into problem representations did not. Learning-disabled students made more importations, and more implausible importations, than did non-disabled students. Moreover, this pattern was associated with poor problem solving. Only proficient readers showed awareness of the source of the information (text or extratext) on which their predictions were based.

  18. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection has been a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis: by decomposing the original tool wear image, it reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to celebrated algorithms developed in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Method for guessing the response of a physical system to an arbitrary input

    DOEpatents

    Wolpert, David H.

    1996-01-01

    Stacked generalization is used to minimize the generalization errors of one or more generalizers acting on a known set of input values and output values representing a physical manifestation and a transformation of that manifestation, e.g., hand-written characters to ASCII characters, spoken speech to computer commands, etc. Stacked generalization acts to deduce the biases of the generalizer(s) with respect to a known learning set and then correct for those biases. This deduction proceeds by generalizing in a second space whose inputs are the guesses of the original generalizers when taught with part of the learning set while trying to guess the rest of it, and whose output is the correct guess. Stacked generalization can be used to combine multiple generalizers or to provide a correction to a guess from a single generalizer.
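
    A minimal sketch of the idea on a character-recognition-like task, using scikit-learn: the level-1 learner is trained on out-of-fold guesses of the level-0 generalizers, which is what lets stacking deduce and correct their biases rather than memorize them. The choice of learners and dataset is illustrative, not the patent's.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict, train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_digits(return_X_y=True)          # stand-in for character images
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        level0 = [KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)]

        # Level-1 inputs: guesses each level-0 learner makes on data it was NOT trained on.
        meta_X = np.column_stack([cross_val_predict(clf, X_tr, y_tr, cv=5,
                                                    method="predict_proba")
                                  for clf in level0])
        meta = LogisticRegression(max_iter=1000).fit(meta_X, y_tr)

        for clf in level0:
            clf.fit(X_tr, y_tr)
        meta_X_te = np.column_stack([clf.predict_proba(X_te) for clf in level0])
        print("stacked accuracy:", meta.score(meta_X_te, y_te))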

  20. Optimal frame-by-frame result combination strategy for OCR in video stream

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Lynchenko, Aleksander; Krivtsov, Valeriy

    2018-04-01

    This paper describes the problem of combining classification results from multiple observations of one object. This task can be regarded as a particular case of decision-making using a combination of expert votes with calculated weights. The accuracy of various methods of combining the classification results, depending on different models of the input data, is investigated on the example of frame-by-frame character recognition in a video stream. Experiments show that choosing the single most competent expert has an advantage when the input data contain no irrelevant observations (here, irrelevant means affected by character localization and segmentation errors). At the same time, this work demonstrates the advantage of combining the several most competent experts according to the multiplication rule or by voting when irrelevant samples are present in the input data.
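
    The two strategies compared can be written down directly: a weighted product of per-frame class-probability vectors versus keeping only the single most competent frame. The weights and probabilities below are invented for illustration.

        import numpy as np

        def combine(frame_probs, weights, rule="product"):
            """Combine per-frame class-probability vectors for one character."""
            p = np.asarray(frame_probs, dtype=float)
            w = np.asarray(weights, dtype=float)
            if rule == "best":                       # single most competent expert
                return p[np.argmax(w)]
            logp = (w[:, None] * np.log(p + 1e-12)).sum(axis=0)   # weighted product rule
            out = np.exp(logp - logp.max())
            return out / out.sum()

        frames = [[0.6, 0.3, 0.1],                   # three frames, three classes
                  [0.5, 0.4, 0.1],
                  [0.2, 0.7, 0.1]]                   # e.g. a frame with a segmentation error
        w = [1.0, 1.0, 0.3]                          # down-weight the suspect frame
        print(combine(frames, w, "product").round(3), combine(frames, w, "best"))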

  1. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  2. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent, quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
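
    The error-rate metric defined above is a simple ratio; a minimal worked example follows, with hypothetical trayline counts.

        # Human error rate = number of errors / number of opportunities for error.
        # The tray counts below are hypothetical, for illustration only.
        errors = 21
        opportunities = 700   # e.g., items assembled on the trayline in a day
        error_rate = errors / opportunities
        print(f"Daily human error rate: {error_rate:.1%}")  # 3.0%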

  3. Injury prevention practices as depicted in G and PG rated movies: the sequel

    PubMed Central

    Ramsey, L; Ballesteros, M; Pelletier, A; Wolf, J

    2005-01-01

    Objective: To determine whether the depiction of injury prevention practices in children's movies released during 1998–2002 is different from an earlier study, which found that characters were infrequently depicted practicing recommended safety behaviors. Methods: The top 25 G (general audience) and PG (parental guidance suggested) rated movies per year from 1998–2002 comprised the study sample. Movies or scenes not set in the present day, animated, documentary, or not in English were excluded; fantasy scenes were also excluded. Injury prevention practices of motor vehicle occupants, pedestrians, bicyclists, and boaters were recorded for characters with speaking roles. Results: Compared with the first study, the proportion of scenes with characters wearing safety belts increased (27% v 35%, p<0.01), the proportion of scenes with characters wearing personal flotation devices decreased (17% v 0%, p<0.05), and no improvement was noted in pedestrian behavior or use of bicycle helmets. Conclusions: Despite a modest increase in safety belt usage, appropriate injury prevention practices are still infrequently shown in top grossing G and PG rated movies. The authors recommend that the entertainment industry incorporate safe practices into children's movies. Parents should call attention to the depiction of unsafe behaviors in movies and educate children to follow recommended safety practices. PMID:16326770

  4. Beyond the bucket: testing the effect of experimental design on rate and sequence of decay

    NASA Astrophysics Data System (ADS)

    Gabbott, Sarah; Murdock, Duncan; Purnell, Mark

    2016-04-01

    Experimental decay has revealed the potential for profound biases in our interpretations of exceptionally preserved fossils, with non-random sequences of character loss distorting the position of fossil taxa in phylogenetic trees. By characterising these sequences we can rewind this distortion and make better-informed interpretations of the affinity of enigmatic fossil taxa. Equally, the rate of character loss is crucial for estimating the preservation potential of phylogenetically informative characters, and for revealing the mechanisms of preservation themselves. However, experimental decay has been criticised for poorly modelling 'real' conditions, and dismissed as unsophisticated 'bucket science'. Here we test the effect of differing experimental parameters on the rate and sequence of decay. By doing so, we can test the assumption that the results of decay experiments are applicable to informing interpretations of exceptionally preserved fossils from diverse preservational settings. The results of our experiments demonstrate the validity of using the sequence of character loss as a phylogenetic tool, and shed light on the extent to which environment must be considered before making decay-informed interpretations or reconstructing taphonomic pathways. With careful consideration of experimental design, driven by testable hypotheses, decay experiments are robust and informative - experimental taphonomy needn't kick the bucket just yet.

  5. Role of character strengths in outcome after mild complicated to severe traumatic brain injury: a positive psychology study.

    PubMed

    Hanks, Robin A; Rapport, Lisa J; Waldron-Perrine, Brigid; Millis, Scott R

    2014-11-01

    To examine the effects of character strengths on psychosocial outcomes after mild complicated to severe traumatic brain injury (TBI). Prospective study with consecutive enrollment. A Midwestern rehabilitation hospital. Persons with mild complicated to severe TBI (N=65). Not applicable. Community Integration Measure, Disability Rating Scale, Modified Cumulative Illness Rating Scale, Positive and Negative Affect Schedule, Satisfaction with Life Scale, Values in Action Inventory of Strengths, and Wechsler Test of Adult Reading. Character virtues and strengths were moderately associated with subjective outcomes; associations with objective outcomes were fewer and weaker. Specifically, positive attributes were associated with greater life satisfaction and perceived community integration. Fewer and less strong associations were observed for objective well-being; however, character strengths and virtues showed unique value in predicting physical health and disability. Positive affectivity was not meaningfully related to objective outcomes, but it was significantly related to subjective outcomes. In contrast, negative affectivity was related to objective but not subjective outcomes. Given the strength of the associations between positive aspects of character, or ways of perceiving the world, and positive feelings about one's current life situation, treatments focused on facilitating these virtues and strengths in persons who have experienced TBI may result in better perceived outcomes and potentially lower subsequent comorbidities.

  6. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  7. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  8. Remembering gay/lesbian media characters: can Ellen and Will improve attitudes toward homosexuals?

    PubMed

    Bonds-Raacke, Jennifer M; Cady, Elizabeth T; Schlegel, Rebecca; Harris, Richard J; Firebaugh, Lindsey

    2007-01-01

    The purpose of the current research was twofold. First, a pilot study was conducted in which participants were asked to recall any memorable gay or lesbian television or film character and complete a survey about their perceptions of the character. Results indicated that over two-thirds of heterosexual participants recalled either Ellen or Will, and evaluative ratings for these characters were generally positive. The second purpose of this research was to examine the priming effects of remembering portrayals of homosexual characters in the media. Therefore, an experiment was conducted to directly assess the effects of thinking about either a positive or negative homosexual character on heterosexuals' general attitudes toward gay men and lesbians. Results indicated that those recalling a positive portrayal later showed a more positive attitude toward gay men than those recalling a negative portrayal, and women had a more positive attitude overall than men toward gay men and lesbians. Such findings illustrate the importance of positive role models in entertainment media as potential primes of social attitudes.

  9. Enhancing GADRAS Source Term Inputs for Creation of Synthetic Spectra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, Steven M.; Harding, Lee

    The Gamma Detector Response and Analysis Software (GADRAS) team has enhanced the source term input for the creation of synthetic spectra. These enhancements include the following: allowing users to programmatically provide source information to GADRAS through memory, rather than through a string limited to 256 characters; allowing users to provide their own source decay database information; and updating the default GADRAS decay database to fix errors and include coincident gamma information.

  10. Tobacco and alcohol use in G-rated children's animated films.

    PubMed

    Goldstein, A O; Sobel, R A; Newman, G R

    Tobacco and alcohol use among youth are major public health problems, but the extent to which children are routinely exposed to tobacco and alcohol products in children's films is unknown. To identify the prevalence and characteristics associated with tobacco and alcohol use portrayed in G-rated, animated feature films, all G-rated, animated feature films released between 1937 and 1997 by 5 major production companies (Walt Disney Co, MGM/United Artists, Warner Brothers Studios, Universal Studios, and 20th Century Fox) that were available on videotape were reviewed for episodes of tobacco and alcohol use. The measures were the presence of tobacco and alcohol use in each film, the type of tobacco or alcohol used, duration of use, the type of character using the substance (bad, neutral, or good), and any associated effects. Of 50 films reviewed, 34 (68%) displayed at least 1 episode of tobacco or alcohol use. Twenty-eight (56%) portrayed 1 or more incidences of tobacco use, including all 7 films released in 1996 and 1997. Twenty-five films (50%) included alcohol use. Smoking was portrayed on screen by 76 characters for a total of more than 45 minutes; alcohol use was portrayed by 63 characters for 27 minutes. Good characters use tobacco and alcohol as frequently as bad characters. Cigars and wine are shown in these films more often than other tobacco or alcohol substances. More than two thirds of animated children's films feature tobacco or alcohol use in story plots without clear verbal messages about the negative long-term health effects associated with use of either substance.

  11. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033

  12. The Role of Honesty and Benevolence in Children's Judgments of Trustworthiness

    ERIC Educational Resources Information Center

    Xu, Fen; Evans, Angela D.; Li, Chunxia; Li, Qinggong; Heyman, Gail; Lee, Kang

    2013-01-01

    The present investigation examined the relation between honesty, benevolence, and trust in children. One hundred and eight 7-, 9-, and 11-year-olds were read four story types in which the character's honesty (honest or dishonest) was crossed with their intentions (helping or harming). Children rated the story character's honesty, benevolence, and…

  13. Television Sex Roles in the 1980s: Do Viewers' Sex and Sex Role Orientation Change the Picture?

    ERIC Educational Resources Information Center

    Dambrot, Faye H.; And Others

    1988-01-01

    Investigates the viewer perceptions of female and male television characters as a result of viewer sex and sex role orientation, based on the responses of 677 young adults to the Personal Attributes Questionnaire (PAQ). Viewer gender had an impact on the rating of female characters. (FMW)

  14. Typewriting rate as a function of reaction time.

    PubMed

    Hayes, V; Wilson, G D; Schafer, R L

    1977-12-01

    This study was designed to determine the relationship between reaction time and typewriting rate. Subjects were 24 typists ranging in age from 19 to 39 yr. Reaction times (.001 sec) to a light were recorded for each finger and to each alphabetic character and three punctuation marks. Analysis of variance yielded significant differences in reaction time among subjects and fingers. The correlation between typewriting rate and average reaction time to the alphabetic characters and three punctuation marks was -.75. The correlation between typewriting rate and the difference between the reaction times of the hands was -.42. Factors influencing typewriting rate may include the reaction time of the fingers, the difference between the reaction times of the hands, and the reaction time to individual keys on the typewriter. Implications exist for instructional methodology and further research.
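
    A minimal sketch of the correlation computation reported above, using numpy; the per-typist reaction times and typing rates below are invented for illustration.

        import numpy as np

        # Hypothetical data: mean reaction time (s) and typing rate (words/min)
        # for a handful of typists; values are invented for illustration.
        reaction_time = np.array([0.18, 0.21, 0.25, 0.28, 0.31, 0.35])
        typing_rate   = np.array([78,   70,   62,   55,   50,   44])

        # Pearson correlation between reaction time and typing rate;
        # a strong negative value mirrors the -.75 reported in the study.
        r = np.corrcoef(reaction_time, typing_rate)[0, 1]
        print(f"r = {r:.2f}")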

  15. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important: OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error counting results reveals that they are not reproducible as published, and that they are not strictly comparable, owing to larger variances in the counts than sampling variance alone would predict. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
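
    A minimal sketch of the dynamic-programming (Levenshtein) accounting method the paper discusses, with the simplest uniform edit weights; the paper's point is precisely that such choices (weights, suspect markers) are often unreported, so this is one possible instantiation rather than any particular published method.

        def levenshtein(ocr: str, truth: str) -> int:
            """Minimum number of single-character edits (substitution,
            insertion, deletion) turning `ocr` into `truth`."""
            m, n = len(ocr), len(truth)
            # dp[j] holds the edit distance between ocr[:i] and truth[:j].
            dp = list(range(n + 1))
            for i in range(1, m + 1):
                prev, dp[0] = dp[0], i
                for j in range(1, n + 1):
                    cur = dp[j]
                    cost = 0 if ocr[i - 1] == truth[j - 1] else 1
                    dp[j] = min(dp[j] + 1,      # deletion
                                dp[j - 1] + 1,  # insertion
                                prev + cost)    # substitution / match
                    prev = cur
            return dp[n]

        errors = levenshtein("he11o wor1d", "hello world")
        print(errors, "errors")  # 3 (three '1' characters misread as 'l')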

  16. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had initially increased; they had decreased again by the final postimplementation data collection. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  17. [Characters of infiltration and preferential flow of black soil in Northeast China under different tillage patterns].

    PubMed

    Li, Wen-Feng; Zhang, Xiao-Ping; Liang, Ai-Zhen; Shen, Yan; Shi, Xiu-Huan; Luo, Jin-Ming; Yang, Xue-Ming

    2008-07-01

    By using dye tracer and double-ring infiltrometer techniques, the infiltration and preferential flow characteristics of black soil under no-tillage (NT) and fall moldboard plow (MP) were compared after six years of continuous management. The results showed that the infiltration rate was higher under NT than under MP. When the infiltration reached steady state, the infiltration rate and accumulative infiltration capacity under NT were 1.35 and 1.44 times as high as those under MP, respectively. The penetration depth of methylene blue reached 43 cm in NT soil, 16 cm deeper than in MP soil. Compared with MP soil, NT soil had better-developed pore structure and more biological pores, and presented better preferential flow characteristics, which are important for water infiltration and for soil and water conservation.

  18. Temperament and character personality dimensions in patients with dental anxiety.

    PubMed

    Bergdahl, Maud; Bergdahl, Jan

    2003-04-01

    The aim of the present study was to investigate character and temperament dimensions of personality in six men and 31 women (aged 20-57 yr) with severe dental anxiety, and to evaluate whether these dimensions were associated with the level of dental anxiety. The Dental Anxiety Scale (DAS) and the Temperament and Character Inventory (TCI) were used. High ratings in novelty seeking and female gender predicted high DAS scores. Compared with controls, the patients scored significantly higher on the temperament dimension novelty seeking. For character dimensions, the patients scored lower on cooperativeness and higher on self-transcendence than controls. Our results indicate that patients with dental anxiety are neurotic extraverts (i.e., novelty seekers who experience brief dissociative periods and magical thinking). Furthermore, the combination of the inherited temperament dimension novelty seeking and the socially learned character dimensions cooperativeness and self-transcendence seems to form a personality vulnerable to developing dental anxiety.

  19. How jurors use and misuse character evidence.

    PubMed

    Hunt, Jennifer S; Budesheim, Thomas Lee

    2004-04-01

    The Federal Rules of Evidence allow defendants to offer testimony about their good character, but that testimony can be impeached with cross-examination or a rebuttal witness. It is assumed that jurors use the defense's character evidence (CE) to form guilt and conviction judgments but use impeachment evidence only to assess the character witness's credibility. Two experiments tested these assumptions by presenting mock jurors with various forms of CE and impeachment. Participants made trait ratings for the character witness and defendant and guilt and conviction judgments. Positive CE did not affect guilt or conviction judgments, but cross-examination caused a backlash in which judgments were harsher than when no CE was given. Using path analysis, the authors tested a model of the process by which CE and impeachment affect defendant and witness impressions and guilt and conviction judgments. Implications for juror decision making are discussed.

  20. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
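
    A minimal Monte Carlo sketch of differential detection over AWGN for ordinary quaternary DPSK, the baseline against which the abstract compares; the Q2PSK pulse shaping itself is beyond this illustration, and the Eb/N0 value is arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n, ebn0_db = 100_000, 10.0

        # Quaternary symbols mapped to phase increments of k * pi/2.
        data = rng.integers(0, 4, n)
        dphi = data * np.pi / 2

        # Differential encoding: transmitted phase is the running sum.
        tx = np.exp(1j * np.cumsum(dphi))

        # AWGN with per-symbol SNR Es/N0 = 2 * Eb/N0 (2 bits per symbol).
        esn0 = 2 * 10 ** (ebn0_db / 10)
        noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * esn0)
        rx = tx + noise

        # Differential detection: phase difference of consecutive symbols.
        diff = rx[1:] * np.conj(rx[:-1])
        est = np.round(np.angle(diff) / (np.pi / 2)) % 4

        ser = np.mean(est != data[1:])
        print(f"Symbol error rate at Eb/N0 = {ebn0_db} dB: {ser:.2e}")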

  1. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
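
    A minimal sketch of the extrapolation step described above: given measured fractions of uncorrectable error patterns at each error weight, the block error rate at a small physical error probability p follows from the binomial weight distribution. The code length and measured fractions below are invented for illustration.

        from math import comb

        def block_error_rate(p: float, n: int, frac_uncorrectable: dict[int, float]) -> float:
            """P(block error) = sum over error weights w of
            C(n, w) p^w (1-p)^(n-w) * P(uncorrectable | weight w)."""
            return sum(comb(n, w) * p**w * (1 - p) ** (n - w) * f
                       for w, f in frac_uncorrectable.items())

        # Hypothetical simulation results: fraction of weight-w error patterns
        # the decoder failed to correct (invented numbers, n = 64 qubits).
        frac = {1: 0.0, 2: 0.0, 3: 0.001, 4: 0.02, 5: 0.15}

        for p in (1e-3, 1e-4):
            print(p, block_error_rate(p, 64, frac))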

  2. Different personalities between depression and anxiety.

    PubMed

    Tanaka, E; Sakamoto, S; Kijima, N; Kitamura, T

    1998-12-01

    We examined the different personality dimensions between depression and anxiety with Cloninger's seven-factor model of temperament and character. The Temperament and Character Inventory (TCI), which measures four temperament and three character dimensions of Cloninger's personality theory (125-item short version), the Self-rating Depression Scale (SDS), and the State-Trait Anxiety Inventory (STAI) were administered to 223 Japanese students. With hierarchical regression analysis, the SDS score was predicted by scores of Harm-Avoidance, Self-Directedness, and Self-Transcendence, even after controlling for the STAI score. The STAI score was predicted by scores of Self-Directedness and Cooperativeness, even after controlling for the SDS score. More importance should be attached to these dimensions of character because they might contribute to both depression and anxiety.

  3. The Inaccuracy of National Character Stereotypes

    PubMed Central

    McCrae, Robert R.; Chan, Wayne; Jussim, Lee; De Fruyt, Filip; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Hřebíčková, Martina; Graf, Sylvie; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; Reátigui, Norma; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Marušić, Iris; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Purić, Danka; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2013-01-01

    Consensual stereotypes of some groups are relatively accurate, whereas others are not. Previous work suggesting that national character stereotypes are inaccurate has been criticized on several grounds. In this article we (a) provide arguments for the validity of assessed national mean trait levels as criteria for evaluating stereotype accuracy; and (b) report new data on national character in 26 cultures from descriptions (N=3,323) of the typical male or female adolescent, adult, or old person in each. The average ratings were internally consistent and converged with independent stereotypes of the typical culture member, but were weakly related to objective assessments of personality. We argue that this conclusion is consistent with the broader literature on the inaccuracy of national character stereotypes. PMID:24187394

  4. A distinguishing method of printed and handwritten legal amount on Chinese bank check

    NASA Astrophysics Data System (ADS)

    Zhu, Ningbo; Lou, Zhen; Yang, Jingyu

    2003-09-01

    In optical Chinese character recognition, printed and handwritten characters must be distinguished at an early stage, because the methods for recognizing these two types of characters differ considerably. In this paper, we propose a method for removing seals, together with criteria for judging whether they should be removed. We also present an approach for clearing scattered noise fragments left after image segmentation. Four sets of classifying features that discriminate between printed and handwritten characters are adopted. The proposed approach was applied to an automatic check processing system and tested on about 9031 checks; the recognition rate exceeded 99.5%.

  5. Investigating the tool marks on oracle bones inscriptions from the Yinxu site (ca., 1319-1046 BC), Henan province, China.

    PubMed

    Zhao, Xiaolong; Tang, Jigen; Gu, Zhou; Shi, Jilong; Yang, Yimin; Wang, Changsui

    2016-09-01

    Oracle bone inscriptions from the Shang dynasty (1600-1046 BC) are the earliest well-developed writing forms of the Chinese character system, and their carving techniques have not previously been studied by tool mark analysis with microscopy. In this study, a digital microscope with three-dimensional surface reconstruction based on extended depth of focus technology was used to investigate tool marks on the surface of four pieces of oracle bones excavated at the eastern area of Huayuanzhuang, Yinxu site (ca. 1319-1046 BC), the last capital of the Shang dynasty, Henan province, China. The results show that there were two procedures for carving the characters on the analyzed tortoise shells. The first procedure was direct carving. The second was "outlining design," which means engraving a formal character after engraving a draft with a pointed tool. Most of the strokes developed by an engraver do not overlap the smaller draft, which implies that the outlining design would be a sound way to avoid errors such as wrong and missing characters. The strokes of these characters have different shapes at the two ends and variations in the width and depth of the grooves. Moreover, the bottoms of the grooves are always rugged. Thus, the use of rotary wheel-cutting tools could be ruled out. In most cases, the starting points of the strokes are round or flat while the finishing points are always pointed. Moreover, the strokes would have been engraved from top to bottom. When vertical or horizontal strokes had been engraved, the shell would be turned about 90 degrees to engrave the crossed strokes from top to bottom. There was no preferred order for engraving vertical or horizontal strokes. Since both sides of the grooves of the characters are neat and there are no unorganized tool marks, it is suggested that some sharp tools were used for engraving characters on the shells. Microsc. Res. Tech. 79:827-832, 2016.

  6. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79.2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0.8% (assignation of sex) to 68.6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose-designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  7. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
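
    A minimal sketch of the rate arithmetic above: a rate per 1000 with a normal-approximation (Wald) 95% CI, an assumed method since the paper's exact interval procedure is not stated. The event count of 15 is inferred from 1.2/1000 of the 12,567 audited errors.

        from math import sqrt

        def rate_per_1000_ci(events: int, total: int, z: float = 1.96):
            """Rate per 1000 with a Wald (normal-approximation) 95% CI."""
            p = events / total
            se = sqrt(p * (1 - p) / total)
            return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

        # 15 of 12,567 prescribing errors had incident reports (~1.2/1000).
        rate, low, high = rate_per_1000_ci(15, 12567)
        print(f"{rate:.1f}/1000 (95% CI: {low:.1f}-{high:.1f})")  # 1.2/1000 (0.6-1.8)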

  8. Non-equilibrium dynamics and floral trait interactions shape extant angiosperm diversity

    PubMed Central

    O'Meara, Brian C.; Smith, Stacey D.; Armbruster, W. Scott; Harder, Lawrence D.; Hardy, Christopher R.; Hileman, Lena C.; Hufford, Larry; Litt, Amy; Magallón, Susana; Smith, Stephen A.; Stevens, Peter F.; Fenster, Charles B.; Diggle, Pamela K.

    2016-01-01

    Why are some traits and trait combinations exceptionally common across the tree of life, whereas others are vanishingly rare? The distribution of trait diversity across a clade at any time depends on the ancestral state of the clade, the rate at which new phenotypes evolve, the differences in speciation and extinction rates across lineages, and whether an equilibrium has been reached. Here we examine the role of transition rates, differential diversification (speciation minus extinction) and non-equilibrium dynamics on the evolutionary history of angiosperms, a clade well known for the abundance of some trait combinations and the rarity of others. Our analysis reveals that three character states (corolla present, bilateral symmetry, reduced stamen number) act synergistically as a key innovation, doubling diversification rates for lineages in which this combination occurs. However, this combination is currently less common than predicted at equilibrium because the individual characters evolve infrequently. Simulations suggest that angiosperms will remain far from the equilibrium frequencies of character states well into the future. Such non-equilibrium dynamics may be common when major innovations evolve rarely, allowing lineages with ancestral forms to persist, and even outnumber those with diversification-enhancing states, for tens of millions of years. PMID:27147092

  9. Non-equilibrium dynamics and floral trait interactions shape extant angiosperm diversity.

    PubMed

    O'Meara, Brian C; Smith, Stacey D; Armbruster, W Scott; Harder, Lawrence D; Hardy, Christopher R; Hileman, Lena C; Hufford, Larry; Litt, Amy; Magallón, Susana; Smith, Stephen A; Stevens, Peter F; Fenster, Charles B; Diggle, Pamela K

    2016-05-11

    Why are some traits and trait combinations exceptionally common across the tree of life, whereas others are vanishingly rare? The distribution of trait diversity across a clade at any time depends on the ancestral state of the clade, the rate at which new phenotypes evolve, the differences in speciation and extinction rates across lineages, and whether an equilibrium has been reached. Here we examine the role of transition rates, differential diversification (speciation minus extinction) and non-equilibrium dynamics on the evolutionary history of angiosperms, a clade well known for the abundance of some trait combinations and the rarity of others. Our analysis reveals that three character states (corolla present, bilateral symmetry, reduced stamen number) act synergistically as a key innovation, doubling diversification rates for lineages in which this combination occurs. However, this combination is currently less common than predicted at equilibrium because the individual characters evolve infrequently. Simulations suggest that angiosperms will remain far from the equilibrium frequencies of character states well into the future. Such non-equilibrium dynamics may be common when major innovations evolve rarely, allowing lineages with ancestral forms to persist, and even outnumber those with diversification-enhancing states, for tens of millions of years.

  10. Cross Section Sensitivity and Propagated Errors in HZE Exposures

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.

    2005-01-01

    It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interactions with shielding-material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross-sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate the dose equivalent during solar minimum, in units of cSv/yr, associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions, and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross-sections. We perform this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum, and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.

  11. Multi-font printed Mongolian document recognition system

    NASA Astrophysics Data System (ADS)

    Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming

    2009-01-01

    Mongolian is one of the major ethnic languages in China. Large numbers of printed Mongolian documents need to be digitized for digital libraries and various applications. Traditional Mongolian script has a unique writing style and multi-font-type variations, which bring challenges to Mongolian OCR research. Because traditional Mongolian script has certain peculiarities, for example, one character may be part of another character, we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find the segmentation points by analyzing the properties of projections and connected components. As Mongolian has different font types, which are categorized into two major groups, the segmentation parameters are adjusted for each group. A font-type classification method for the two font-type groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective. The text recognition rate is 96.9% on test samples from practical documents with multiple font types and mixed scripts.
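
    A minimal sketch of projection analysis for locating candidate segmentation points in a binarized word image; since traditional Mongolian is written vertically, ink is projected across rows. The axis convention and gap threshold are assumptions for illustration.

        import numpy as np

        def segmentation_points(binary: np.ndarray, max_ink: int = 1) -> list[int]:
            """Candidate cut rows in a vertically written word image.

            `binary` is a 2D array with 1 for ink, 0 for background; rows whose
            ink count drops to `max_ink` or below are treated as gaps between
            glyph components."""
            profile = binary.sum(axis=1)          # ink pixels per row
            gap_rows = np.flatnonzero(profile <= max_ink)
            # Keep one representative row per contiguous run of gap rows.
            return [int(r) for i, r in enumerate(gap_rows)
                    if i == 0 or r != gap_rows[i - 1] + 1]

        # Toy image: two ink blobs separated by a blank band.
        img = np.zeros((9, 5), dtype=int)
        img[1:3, 1:4] = 1
        img[6:8, 1:4] = 1
        print(segmentation_points(img))  # [0, 3, 8]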

  12. [Comparative analysis of agronomic and qualitative characters in different lines of Dendrobium denneanum].

    PubMed

    He, Tao; Deng, Li; Lin, Yuan; Li, Bo; Yang, Xiaofan; Wang, Fang; Chun, Ze

    2010-08-01

    To provide a theoretical basis for breeding good varieties of Dendrobium denneanum, the agronomic and qualitative characters of four different lines, and the relationships among them, were studied. The stem length, stem diameter, leaf length, leaf width, length/width ratio, and leaf area were measured. Single-stem fresh and dry weights were measured and the drying rate was calculated. The contents of polysaccharides and total alkaloids were determined by sulfuric acid-phenol colorimetry and acid-dye colorimetry, respectively. The correlations between characters were analyzed. The results showed that differences in major agronomic characters among the four lines were significant. The plant types of dq-1 and dq-2 were taller, dq-3 was medium, and dq-4 was shorter. The fresh weight of stem and the content of polysaccharides were highest in dq-2, at 7.81 g and 14.33%, respectively, while the highest content of total alkaloids, 0.486%, was found in dq-3. There were significant correlations between agronomic characters, but these characters had low or no correlations with qualitative characters such as polysaccharides and total alkaloids. The results show that the contents of polysaccharides and total alkaloids differed significantly among the four lines of D. denneanum, which could therefore be selected for different uses.

  13. RESEARCH ON ROBUST METHODS FOR EXTRACTING AND RECOGNIZING PHOTOGRAPHY MANAGEMENT ITEMS FROM VARIOUS IMAGE DATA OF CONSTRUCTION

    NASA Astrophysics Data System (ADS)

    Kitagawa, Etsuji; Tanaka, Shigenori; Abiko, Satoshi; Wakabayashi, Katsuma; Jiang, Wenyuan

    Recently, electronic delivery of various documents has been carried out by the Ministry of Land, Infrastructure, Transport and Tourism in construction fields. Among these documents are construction photographs, which must be delivered together with photography management items such as the construction name, type of works, etc. However, transcribing the contents of these items from the characters printed and handwritten on the blackboard in each image is costly. In this research, we develop a system that extracts the contents of these items from construction photographs taken in various scenes by preprocessing the image, recognizing characters with OCR, and correcting errors with natural language processing. We confirm the effectiveness of the system by experiments on each function and on the entire system.

  14. Application of the ANNA neural network chip to high-speed character recognition.

    PubMed

    Sackinger, E; Boser, B E; Bromley, J; Lecun, Y; Jackel, L D

    1992-01-01

    A neural network with 136,000 connections for recognition of handwritten digits has been implemented using a mixed analog/digital neural network chip. The neural network chip is capable of processing 1000 characters/s. The recognition system has essentially the same error rate (5%) as a simulation of the network with 32-b floating-point precision.

  15. Sympathy for the Devil: Killing the Other in "Milk" and "The Reader"

    ERIC Educational Resources Information Center

    Beck, Bernard

    2009-01-01

    Two recent, highly rated movies depict central characters who are involved in killing members of groups despised in their societies. In "Milk" and "The Reader," the characters of Dan White and Hanna Schmitz, respectively, are treated with empathy and a search for understanding. Their personal sufferings and confusions are highlighted, and the…

  16. Measuring Collective Intelligence in Human-Machine Systems

    DTIC Science & Technology

    2013-12-09

    In addition, the Doonesbury comic strip on July 15, 2012, was based on the results of our research (see http://doonesbury.slate.com/strip/archive/2012/07/15... The comic strip includes one "error" in its summary of our research results. The Doonesbury character says that "Group IQ doesn't correlate with... comic strip correspond very closely to the actual results of our research as published in Science magazine. Other honors include: (a) Malone

  17. Well, what about intraspecific variation? Taxonomic and phylogenetic characters in the genus Synoeca de Saussure (Hymenoptera, Vespidae).

    PubMed

    Carpenter, James M; Andena, Sergio R; Noll, Fernando B; Wenzel, John W

    2013-01-01

    Cely and Sarmiento (2011) took issue with the cladistic analysis of relationships among species of the genus Synoeca by Andena et al. (2009a), and presented a reanalysis. They claimed that intraspecific variation in the genus is meaningful, and proper consideration yields a conclusion different from that of Andena et al. Both their critique and reanalysis are vitiated by numerous errors, as is shown in the present paper.

  18. Recruiter Stress: An Experiment Using Text-messages as a Stress Intervention

    DTIC Science & Technology

    2013-02-01

    The messages covered a number of areas (e.g., physiological, cognitive, social) and were generally self-contained with 140 characters or less (to meet text... although most differences here are small and within margins of error. The two most noticeable differences are for the impact of work stress on job... order to compute estimated hour and day losses. Using recruiter population estimates, over 4,500 man-hours were lost due to late arrivals and over 9,500

  19. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
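
    A minimal sketch of the nearest-neighbor idea described above: map a document's image features to the restoration technique that minimized OCR error on the most similar training documents. The features, labels, and use of scikit-learn's KNeighborsClassifier are illustrative stand-ins, not the paper's implementation.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical training set: one feature vector per document
        # (e.g., stroke thickness, background noise level, contrast) and the
        # restoration technique that gave the lowest OCR error on it.
        X_train = np.array([[2.1, 0.8, 0.3],
                            [1.2, 0.1, 0.9],
                            [2.4, 0.7, 0.2],
                            [1.0, 0.2, 0.8]])
        y_train = np.array(["thicken", "none", "thicken", "none"])

        selector = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

        # Choose a restoration technique for a new document image.
        new_doc = np.array([[2.0, 0.6, 0.4]])
        print(selector.predict(new_doc))  # e.g., ['thicken']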

  20. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian-distributed templates.
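
    A minimal sketch of one common way to obtain roughly uniformly distributed binary vectors as described above: threshold each template feature at its median over the enrollment set, then compare templates by fractional Hamming distance. The median-threshold choice is an assumption; the paper's exact transformations are not reproduced here.

        import numpy as np

        def binarize(templates: np.ndarray) -> np.ndarray:
            """Threshold each feature at its median across the enrollment set,
            yielding roughly uniformly distributed bits."""
            thresholds = np.median(templates, axis=0)
            return (templates > thresholds).astype(np.uint8)

        def hamming(a: np.ndarray, b: np.ndarray) -> float:
            """Fractional Hamming distance between two binary vectors."""
            return float(np.mean(a != b))

        rng = np.random.default_rng(1)
        enrollment = rng.normal(size=(100, 64))   # 100 users, 64-D templates
        bits = binarize(enrollment)

        # A genuine comparison: the same templates with mild sensor noise.
        probe = binarize(enrollment + rng.normal(scale=0.2, size=enrollment.shape))
        print(f"genuine distance  ~ {hamming(bits[0], probe[0]):.2f}")  # small
        print(f"impostor distance ~ {hamming(bits[0], bits[1]):.2f}")   # near 0.5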

  1. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors.

  2. Preventing Negative Behaviors Among Elementary-School Students Through Enhancing Students’ Social-Emotional and Character Development

    PubMed Central

    Snyder, Frank J.; Acock, Alan C.; Vuchinich, Samuel; Beets, Michael W.; Washburn, Isaac J.; Flay, Brian R.

    2013-01-01

    Purpose Examine the effects of a comprehensive, school-wide social-emotional and character development program using a positive youth development perspective. Specifically, we examined a mediation mechanism whereby positive academic-related behaviors mediated the intervention effects on substance use, violence, and sexual activity. Design Matched-pair, cluster-randomized, controlled design. Setting Twenty (10 intervention and 10 control) racially/ethnically diverse schools in Hawaii. Subjects Elementary-aged students (N = 1784) from grade 5. Intervention The Positive Action program. Measures Students self-reported their academic behaviors, together with their substance use, violence, and voluntary sexual activity; teachers rated students’ academic behaviors, substance use, and violence. Analysis Structural equation modeling. Results Students attending intervention schools reported significantly better academic behavior (B = .273, SE = .039, p < .001) and significantly less substance use (B = −.970, SE = .292, p < .01, incidence-rate ratio [IRR] = .379), violence (B = −1.410, SE = .296, p < .001, IRR= .244), and sexual activity (B = − 2.415, SE = .608, p < .001, odds ratio = .089); boys reported more negative behaviors than girls. Intervention effects on student-reported substance use, violence, and sexual activity were mediated by positive academic behavior. Teacher reports corroborated these results, with rated academic behavior partially mediating the effects of the intervention on rated negative behaviors. Conclusion This study (1) provides evidence that adds insight into one mechanism through which a social-emotional and character development program affects negative outcomes and (2) supports social-emotional and character development and positive youth development perspectives that posit that focusing on youths’ assets may reduce negative behaviors. PMID:23470183

  3. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
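
    A minimal sketch of the two recommended safeguards, using scikit-learn as an assumed toolchain (the authors' own code is in R): two-level (nested) cross-validation, where parameter selection happens only in the inner loop, plus a label-permutation run whose error should sit near the trivial baseline if no bias is present.

        import numpy as np
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 200))        # simulated "expression" data, no signal
        y = rng.integers(0, 2, size=60)       # non-informative class labels

        inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)   # inner level: tuning
        outer = cross_val_score(inner, X, y, cv=5)               # outer level: honest error
        print("nested CV error:", 1 - outer.mean())              # ~0.5 expected here

        perm = cross_val_score(inner, X, rng.permutation(y), cv=5)
        print("permuted-label error:", 1 - perm.mean())          # permutation bias check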

  4. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…

  5. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.
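
    The replicate-based logic reduces to a short computation: whenever two replicate calls at the same site disagree, at least one of them is wrong, so half the discordance rate is a lower bound on the per-replicate error rate. A toy sketch with invented genotype calls (coded 0/1/2):

        import numpy as np

        rep1 = np.array([0, 1, 2, 1, 0, 2, 1, 1])   # hypothetical replicate 1 calls
        rep2 = np.array([0, 1, 2, 0, 0, 2, 1, 2])   # hypothetical replicate 2 calls

        discordance = np.mean(rep1 != rep2)
        print(f"discordance = {discordance:.3f}; "
              f"error-rate lower bound = {discordance / 2:.3f}")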

  6. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  7. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate was 15.7% in the children's emergency department, 21.5% for outpatients and 23.6% for discharge medication. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
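
    A computer-calculated dose check of the kind evaluated here can be sketched in a few lines: compare the prescribed dose against weight-based limits and name the error type. The mg/kg limits below are placeholders for illustration, not clinical guidance, and the study's actual decision-support logic is not described in this abstract.

        def check_dose(dose_mg: float, weight_kg: float,
                       lo_mg_per_kg: float = 10.0, hi_mg_per_kg: float = 15.0) -> str:
            """Classify a single prescribed dose against agreed per-kg limits."""
            low, high = lo_mg_per_kg * weight_kg, hi_mg_per_kg * weight_kg
            if dose_mg < low:
                return f"underdose (<{low:.0f} mg)"
            if dose_mg > high:
                return f"overdose (>{high:.0f} mg)"
            return "within agreed range"

        print(check_dose(dose_mg=120, weight_kg=18))   # underdose (<180 mg)
        print(check_dose(dose_mg=240, weight_kg=18))   # within agreed range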

  8. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Due to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method.

  9. Comparison of Meropenem MICs and Susceptibilities for Carbapenemase-Producing Klebsiella pneumoniae Isolates by Various Testing Methods▿

    PubMed Central

    Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.

    2010-01-01

    We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
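
    The error taxonomy used here (very major, major, minor) can be computed mechanically once MICs are categorized. A sketch with hypothetical MICs and placeholder breakpoints standing in for the CLSI values:

        def categorize(mic, s_max=1.0, r_min=4.0):
            """S/I/R by breakpoints; s_max and r_min are placeholders, not CLSI values."""
            return "S" if mic <= s_max else ("R" if mic >= r_min else "I")

        reference = [8, 16, 4, 2, 8]   # hypothetical broth microdilution MICs
        test      = [8, 1, 4, 2, 2]    # hypothetical MICs from a test method

        tally = {"very major": 0, "major": 0, "minor": 0}
        for r, t in zip(reference, test):
            cr, ct = categorize(r), categorize(t)
            if cr == ct:
                continue
            if cr == "R" and ct == "S":
                tally["very major"] += 1   # resistant called susceptible
            elif cr == "S" and ct == "R":
                tally["major"] += 1        # susceptible called resistant
            else:
                tally["minor"] += 1        # any disagreement involving I

        n = len(reference)
        print({k: f"{100 * v / n:.1f}%" for k, v in tally.items()})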

  10. Invariant approach to the character classification

    NASA Astrophysics Data System (ADS)

    Šariri, Kristina; Demoli, Nazif

    2008-04-01

    Image moment analysis is a very useful tool that allows image description invariant to translation, rotation, scale change and some types of image distortion. The aim of this work was the development of a simple method for fast and reliable classification of characters using Hu's and affine moment invariants. Euclidean distance was used as the discrimination feature, with its statistical parameters estimated. The method was tested on classification of Times New Roman font letters as well as sets of handwritten characters. It is shown that using all of Hu's invariants and three affine invariants as the discrimination set improves the recognition rate by 30%.
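
    A sketch of the approach using OpenCV (an assumed toolchain; the abstract does not name one): compute Hu's seven moment invariants for a binarized glyph, log-scale them for numerical stability (our assumption), and classify by minimum Euclidean distance to per-class prototype vectors.

        import cv2
        import numpy as np

        def hu_features(glyph: np.ndarray) -> np.ndarray:
            """glyph: binary uint8 image of one character -> 7 log-scaled invariants."""
            hu = cv2.HuMoments(cv2.moments(glyph, binaryImage=True)).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        def classify(glyph: np.ndarray, prototypes: dict) -> str:
            """prototypes: label -> mean feature vector computed from training glyphs."""
            f = hu_features(glyph)
            return min(prototypes, key=lambda k: np.linalg.norm(f - prototypes[k]))

    Extending the feature vector with the affine moment invariants, as the abstract describes, is what reportedly lifted the recognition rate by 30%.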

  11. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences between the two groups in pre- and post-intervention incidence rates of nursing errors. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% to 15.7%. By domain, the incidence rate decreased significantly in 3 domains ("compliance with aseptic technique", "management of documents", "environmental management") in the experimental group, while it decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk management efforts across the whole health care system.

  12. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    High-accuracy recognition of the handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the results in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has proven truly successful at training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Chinese Academy Yunnan Observatory and obtain the daily full-disc sunspot numbers and sunspot areas from the sunspot drawings. The experimental results show that the proposed method achieves a high recognition accuracy rate.
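
    A minimal CNN of the kind described, sketched in PyTorch; the architecture, input size, and class count are our assumptions, not the authors' network.

        import torch
        import torch.nn as nn

        class CharCNN(nn.Module):
            """Two conv blocks + linear classifier for 1x28x28 character crops."""
            def __init__(self, n_classes: int = 10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x14x14
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # -> 32x7x7
                )
                self.classifier = nn.Linear(32 * 7 * 7, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = CharCNN(n_classes=10)
        logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 dummy crops
        print(logits.shape)                        # torch.Size([8, 10])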

  13. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection

    PubMed Central

    Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-01-01

    Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
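
    The trend analysis reduces to a logistic regression of "error committed" on days of application use. A sketch with simulated data whose parameters are chosen to mimic the reported figures (about 2.3% at day 0 and an odds ratio near 0.97 per day); statsmodels is an assumed toolchain.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        days = rng.integers(0, 46, size=5000).astype(float)   # day of each survey
        p = 1 / (1 + np.exp(-(-3.75 - 0.031 * days)))         # declining error probability
        error = (rng.random(5000) < p).astype(int)            # 1 = potential error committed

        fit = sm.Logit(error, sm.add_constant(days)).fit(disp=0)
        print(fit.params)               # negative slope: errors fall with app use
        print(np.exp(fit.params[1]))    # per-day odds ratio, near 0.97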

  14. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.

    PubMed

    Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-08-18

    The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.

  15. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
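
    The Bayesian search-and-composite step can be illustrated in a few lines: weight every database profile by the Gaussian likelihood of the observed radiances given that profile's simulated radiances, then average the associated rain rates with those weights. Everything below (database, channel count, error magnitude) is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        db_tb = rng.normal(200.0, 30.0, size=(500, 9))   # simulated brightness temps (K)
        db_rain = rng.gamma(2.0, 2.0, size=500)          # rain rate (mm/h) per profile
        obs = db_tb[42] + rng.normal(0.0, 2.0, size=9)   # "observed" radiances
        sigma = 3.0                                      # assumed channel error (K)

        log_w = -0.5 * np.sum((db_tb - obs) ** 2, axis=1) / sigma**2
        w = np.exp(log_w - log_w.max())                  # stabilize before exponentiating
        estimate = np.sum(w * db_rain) / np.sum(w)       # composited best estimate
        print(f"retrieved ~ {estimate:.2f} mm/h (truth {db_rain[42]:.2f})")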

  16. DENSITY-DEPENDENT SELECTION ON CONTINUOUS CHARACTERS: A QUANTITATIVE GENETIC MODEL.

    PubMed

    Tanaka, Yoshinari

    1996-10-01

    A quantitative genetic model of density-dependent selection is presented and analysed with parameter values obtained from laboratory selection experiments conducted by Mueller and his coworkers. The ecological concept of r- and K-selection is formulated in terms of selection gradients on underlying phenotypic characters that influence the density-dependent measure of fitness. Hence the selection gradients on traits are decomposed into two components, one that changes in the direction to increase r, and one that changes in the direction to increase K. The relative importance of the two components is determined by temporal fluctuations in population density. The evolutionary rate of r and K (per-generation changes in r and K due to the genetic responses of the underlying traits) is also formulated. Numerical simulation has shown that with moderate genetic variances of the underlying characters, r and K can evolve rapidly and the evolutionary rate is influenced by synergistic interaction between characters that contribute to r and K. But strong r-selection can occur only with severe and continuous disturbances of populations so that the population density is kept low enough to prevent K-selection. © 1996 The Society for the Study of Evolution.

  17. Are common names becoming less common? The rise in uniqueness and individualism in Japan.

    PubMed

    Ogihara, Yuji; Fujita, Hiroyo; Tominaga, Hitoshi; Ishigaki, Sho; Kashimoto, Takuya; Takahashi, Ayano; Toyohara, Kyoko; Uchida, Yukiko

    2015-01-01

    We examined whether Japanese culture has become more individualistic by investigating how the practice of naming babies has changed over time. Cultural psychology has revealed substantial cultural variation in human psychology and behavior, emphasizing the mutual construction of socio-cultural environment and mind. However, much of the past research did not account for the fact that culture is changing. Indeed, archival data on behavior (e.g., divorce rates) suggest a rise in individualism in the U.S. and Japan. In addition to archival data, cultural products (which express an individual's psyche and behavior outside the head; e.g., advertising) can also reveal cultural change. However, little research has investigated the changes in individualism in East Asia using cultural products. To reveal the dynamic aspects of culture, it is important to present temporal data across cultures. In this study, we examined baby names as a cultural product. If Japanese culture has become more individualistic, parents would be expected to give their children unique names. Using two databases, we calculated the rate of popular baby names between 2004 and 2013. Both databases released the rankings of popular names and their rates within the sample. As Japanese names are generally comprised of both written Chinese characters and their pronunciations, we analyzed these two separately. We found that the rate of popular Chinese characters increased, whereas the rate of popular pronunciations decreased. However, only the rate of popular pronunciations was associated with a previously validated collectivism index. Moreover, we examined the pronunciation variation of common combinations of Chinese characters and the written form variation of common pronunciations. We found that the variation of written forms decreased, whereas the variation of pronunciations increased over time. Taken together, these results showed that parents are giving their children unique names by pairing common Chinese characters with uncommon pronunciations, which indicates an increase in individualism in Japan.

  18. Are common names becoming less common? The rise in uniqueness and individualism in Japan

    PubMed Central

    Ogihara, Yuji; Fujita, Hiroyo; Tominaga, Hitoshi; Ishigaki, Sho; Kashimoto, Takuya; Takahashi, Ayano; Toyohara, Kyoko; Uchida, Yukiko

    2015-01-01

    We examined whether Japanese culture has become more individualistic by investigating how the practice of naming babies has changed over time. Cultural psychology has revealed substantial cultural variation in human psychology and behavior, emphasizing the mutual construction of socio-cultural environment and mind. However, much of the past research did not account for the fact that culture is changing. Indeed, archival data on behavior (e.g., divorce rates) suggest a rise in individualism in the U.S. and Japan. In addition to archival data, cultural products (which express an individual’s psyche and behavior outside the head; e.g., advertising) can also reveal cultural change. However, little research has investigated the changes in individualism in East Asia using cultural products. To reveal the dynamic aspects of culture, it is important to present temporal data across cultures. In this study, we examined baby names as a cultural product. If Japanese culture has become more individualistic, parents would be expected to give their children unique names. Using two databases, we calculated the rate of popular baby names between 2004 and 2013. Both databases released the rankings of popular names and their rates within the sample. As Japanese names are generally comprised of both written Chinese characters and their pronunciations, we analyzed these two separately. We found that the rate of popular Chinese characters increased, whereas the rate of popular pronunciations decreased. However, only the rate of popular pronunciations was associated with a previously validated collectivism index. Moreover, we examined the pronunciation variation of common combinations of Chinese characters and the written form variation of common pronunciations. We found that the variation of written forms decreased, whereas the variation of pronunciations increased over time. Taken together, these results showed that parents are giving their children unique names by pairing common Chinese characters with uncommon pronunciations, which indicates an increase in individualism in Japan. PMID:26557100

  19. Violence in Teen-Rated Video Games

    PubMed Central

    Haninger, Kevin; Ryan, M. Seamus; Thompson, Kimberly M

    2004-01-01

    Context: Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for “Teen”) by the Entertainment Software Rating Board (ESRB) has not been quantified. Objective: To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. Design: We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Main Outcome Measures: Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. Results: Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2%) received only a content descriptor for blood, and 14 game titles (4%) received no content descriptors for violence or blood. In the random sample of 81 T-rated video games we played, 79 games (98%) involved intentional violence for an average of 36% of game play time, and 34 games (42%) contained blood. More than half of the games (51%) depicted 5 or more types of weapons, with players able to select weapons in 48 games (59%). We observed 37 games (46%) that rewarded or required the player to destroy objects, 73 games (90%) that rewarded or required the player to injure characters, and 56 games (69%) that rewarded or required the player to kill. We observed a total of 11,499 character deaths in the 81 games, occurring at an average rate of 122 deaths per hour of game play (range 0 to 1310). This included 5689 human deaths, occurring at an average rate of 61 human deaths per hour of game play (range 0 to 1291). Overall, we identified 44 games (54%) that depicted deaths to nonhuman characters and 51 games (63%) that depicted deaths to human characters, including the player. Conclusions: Content analysis suggests a significant amount of violence, injury, and death in T-rated video games. Given the large amount of violence involving guns and knives, the relative lack of blood suggests that many T-rated video games do not realistically portray the consequences of violence. Physicians and parents should appreciate that T-rated video games may be a source of exposure to violence and some unexpected content for children and adolescents, and that the majority of T-rated video games provide incentives to the players to commit simulated acts of violence. PMID:15208514

  20. Violence in teen-rated video games.

    PubMed

    Haninger, Kevin; Ryan, M Seamus; Thompson, Kimberly M

    2004-03-11

    Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for "Teen") by the Entertainment Software Rating Board (ESRB) has not been quantified. To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2%) received only a content descriptor for blood, and 14 game titles (4%) received no content descriptors for violence or blood. In the random sample of 81 T-rated video games we played, 79 games (98%) involved intentional violence for an average of 36% of game play time, and 34 games (42%) contained blood. More than half of the games (51%) depicted 5 or more types of weapons, with players able to select weapons in 48 games (59%). We observed 37 games (46%) that rewarded or required the player to destroy objects, 73 games (90%) that rewarded or required the player to injure characters, and 56 games (69%) that rewarded or required the player to kill. We observed a total of 11,499 character deaths in the 81 games, occurring at an average rate of 122 deaths per hour of game play (range 0 to 1310). This included 5689 human deaths, occurring at an average rate of 61 human deaths per hour of game play (range 0 to 1291). Overall, we identified 44 games (54%) that depicted deaths to nonhuman characters and 51 games (63%) that depicted deaths to human characters, including the player. Content analysis suggests a significant amount of violence, injury, and death in T-rated video games. Given the large amount of violence involving guns and knives, the relative lack of blood suggests that many T-rated video games do not realistically portray the consequences of violence. Physicians and parents should appreciate that T-rated video games may be a source of exposure to violence and some unexpected content for children and adolescents, and that the majority of T-rated video games provide incentives to the players to commit simulated acts of violence.

  1. Giving dementia a face? The portrayal of older people with dementia in German weekly news magazines between the years 2000 and 2009.

    PubMed

    Kessler, Eva-Marie; Schwender, Clemens

    2012-03-01

    We investigated photographic depictions of older people with dementia in news magazines, considering both their frequency and the observable characteristics of the depicted characters. We examined all 2,604 photos appearing in articles identified using the key words "dementia" and "Alzheimer's" published in the 4 major German weekly news magazines between 2000 and 2009. According to the body text and/or the legend, 154 characters with dementia were identified. Trained judges rated the age and gender of each character as well as the emotional expression, physical functioning, physical surroundings, and social context of the characters. Visual representations of characters with dementia increased linearly across time (both in terms of absolute and relative figures). Women were shown more often than men. Young-old and old-old characters were depicted equally often. Characters were mostly depicted as having positive emotions and good functional health. A large majority of characters were shown in individualized contexts and together with social partners. Only 2 social partners displayed negative emotions, and the social partner was a "helper" in less than one third of cases. Despite the overall low frequency of photos of older people with dementia, dementia seems to have "acquired a face" across the past decade. Although our analysis revealed a heterogeneous portrayal of older people with dementia, "positive" representations clearly prevailed.

  2. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  3. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.

  4. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.

  5. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract exposure-relevant information from the structured elements in the DICOM metadata of these files. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
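
    A sketch of the metadata approach using pydicom (an assumed toolchain; the study's own software is not public in this abstract): recursively walk a CT dose structured report and collect numeric values whose concept name matches the dose index of interest. The code-meaning string varies by vendor, so the "Mean CTDIvol" match below is an assumption to adapt, and the file path is a placeholder.

        import pydicom

        def find_values(item, target="Mean CTDIvol", out=None):
            """Depth-first search of SR content items for numeric values named `target`."""
            out = [] if out is None else out
            name = getattr(item, "ConceptNameCodeSequence", None)
            if name and name[0].CodeMeaning == target:
                mv = getattr(item, "MeasuredValueSequence", None)
                if mv:
                    out.append(float(mv[0].NumericValue))
            for child in getattr(item, "ContentSequence", []):
                find_values(child, target, out)
            return out

        ds = pydicom.dcmread("rdsr.dcm")          # placeholder path to a dose report
        print("CTDIvol values (mGy):", find_values(ds))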

  6. A Subgeneric Classification of the Genus Uranotaenia Lynch Arribalzaga, with a Historical Review and Notes on Other Categories

    DTIC Science & Technology

    1972-01-01

    three species of Pseudoficalbia from New Guinea. While he was correct in his assignment of species, the characters, though they will separate a...and African material, I have made no attempt to correct these errors, except in the Southeast Asian fauna. In a few cases, I have brought them to...current practice of lumping everything into one supposedly homogeneous genus.” While the statement may ultimately prove correct, I prefer to consider at

  7. Added value in health care with six sigma.

    PubMed

    Lenaz, Maria P

    2004-06-01

    Six sigma is the structured application of the tools and techniques of quality management applied on a project basis that can enable organizations to achieve superior performance and strategic business results. The Greek character sigma has been used as a statistical term that measures how much a process varies from perfection, based on the number of defects per million units. Health care organizations using this model proceed from the lower levels of quality performance to the highest level, in which the process is nearly error free.
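
    The defects-per-million figure maps onto a sigma level through the normal quantile function. A worked example; the 1.5-sigma long-term shift is the six sigma literature's convention, included here as an assumption.

        from scipy.stats import norm

        def sigma_level(dpmo: float, shift: float = 1.5) -> float:
            """Sigma level for a given defects-per-million-opportunities rate."""
            return norm.ppf(1 - dpmo / 1_000_000) + shift

        print(round(sigma_level(3.4), 2))      # ~6.0: the "six sigma" benchmark
        print(round(sigma_level(66_807), 2))   # ~3.0: three sigma quality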

  8. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.

  9. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  10. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes and other forms of weather event, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with distinctive output, in particular a GIS-based mapping process that reports the current weather status at the coordinates of each region and can forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is measured by the mean square error (MSE). The MSE is 0.28 for minimum temperature and 0.15 for maximum temperature, 0.38 for minimum humidity and 0.04 for maximum humidity, and 0.076 for wind speed. The lower the forecasting error rate, the better the accuracy.
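
    A toy sketch of the averaging idea (not the paper's implementation): weight each member model by an approximate marginal likelihood on training data (here via BIC under Gaussian errors), then average the members' forecasts. Real BMA for weather forecasting typically fits weights and variances by EM; all data below are invented.

        import numpy as np

        def bic_weights(residuals_per_model, n_params):
            """Posterior-style model weights from BIC, assuming Gaussian errors."""
            n = len(residuals_per_model[0])
            bics = [n * np.log(np.mean(r ** 2)) + k * np.log(n)
                    for r, k in zip(residuals_per_model, n_params)]
            w = np.exp(-0.5 * (np.array(bics) - min(bics)))
            return w / w.sum()

        rng = np.random.default_rng(4)
        resids = [rng.normal(0.0, s, 200) for s in (0.5, 1.0, 3.0)]  # members' training errors
        w = bic_weights(resids, n_params=[2, 2, 2])
        forecasts = np.array([24.8, 25.3, 27.0])   # members' next-day temperature forecasts
        print("weights:", np.round(w, 3),
              "-> BMA forecast:", round(float(w @ forecasts), 2))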

  11. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
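
    The inflation is straightforward to probe by simulation: draw a rare variant and a skewed trait with no true association, regress, and count rejections. The sketch below checks the nominal 0.05 level; the abstract's stronger inflation appears at stricter thresholds and rarer alleles, which a full check would sweep over.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n, maf, reps = 1000, 0.005, 2000
        hits = tested = 0
        for _ in range(reps):
            g = rng.binomial(2, maf, size=n)        # rare SNV genotypes (0/1/2)
            if g.std() == 0:                        # skip monomorphic draws
                continue
            y = rng.gamma(1.0, 1.0, size=n)         # skewed trait, no true effect
            tested += 1
            hits += stats.linregress(g, y).pvalue < 0.05
        print(f"empirical type I error ~ {hits / tested:.3f} (nominal 0.05)")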

  12. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A TB gauge simulated from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors are dramatically reduced. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large number of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
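    A minimal sketch of the cubic-spline step, assuming each bucket tip represents a fixed accumulation (the tip times and the 0.254 mm tip size are illustrative assumptions, not values from the paper): fit a spline to cumulative rainfall versus tip time, then differentiate it to obtain rain rate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tip times (minutes) from a tipping-bucket gauge;
# each tip is assumed to represent 0.254 mm of accumulated rain.
tip_times = np.array([0.0, 2.1, 3.0, 3.6, 4.1, 5.5, 8.2])
tip_size_mm = 0.254
cumulative_mm = tip_size_mm * np.arange(len(tip_times))

# Spline of cumulative rain vs. time; its derivative is the rain rate.
spline = CubicSpline(tip_times, cumulative_mm)
rate_mm_per_min = spline.derivative()

minutes = np.arange(0.0, 8.0, 1.0)
print(rate_mm_per_min(minutes) * 60.0)  # one-minute rain rates in mm/h
```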

  13. [New studies on the history of anesthesiology--a new study on Seishu Hanaoka's "Nyugan Chiken Roku" (a surgical experience with breast cancer)].

    PubMed

    Matsuki, A

    2000-09-01

    Among Japanese physicians active before the end of the Edo era, Seishu Hanaoka is the best known, in foreign countries as well as in Japan. His detailed biography is described in a monograph by Shuzo Kure published in 1923, which has been the most important book for the study of Seishu Hanaoka. Hanaoka worked very hard in various fields, as a surgeon, educator, poet and community developer. However, his best-known activity was his devotion to the development of the oral general anesthetic "Mafutsu-San" or "Tsusen-San". He was the first to succeed in the excision of breast cancer, in a 60-year-old woman named Kan Aiya, under general anesthesia with this agent, on Oct 13th, 1804. The details of the case have been known to us because the manuscript on the case, believed to be in Hanaoka's hand, is extant in the Tenri Library, Tenri University, and the whole manuscript was printed in Kure's monograph. For the past twenty years, I have carefully studied the microfilmed manuscript and the printed text in Kure's book and found several serious bibliographical errors and dubious points between them. They are as follows. 1) There is no definite proof that the manuscript was transcribed by Seishu Hanaoka himself; this attribution was originally proposed by Shuzo Kure without any rational grounds. 2) There are seven fundamental and implausible errors in the use of Chinese characters in the manuscript. Such basic errors could not have been committed by Hanaoka, considering that he was an excellent poet. Shuzo Kure falsified these errors when they were printed in his book; he even altered Chinese characters in one of the photographs of the manuscript in his book. 3) Shuzo Kure did not exhibit this manuscript at the exhibition on the occasion of the 150th anniversary of Seishu Hanaoka's death in Tokyo, supposedly to avoid careful study by other investigators. All the above findings strongly suggest that the manuscript "Nyugan Chiken Roku" could have been transcribed by one of Hanaoka's disciples and not by Hanaoka himself.

  14. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    A digital elevation map of the Earth, which affects space camera imaging, has been prepared, and its effect on imaging has been analysed. Based on the image-motion velocity matching error required by the TDI CCD integration stages, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation within an image motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM, and the Earth elevation map produced for aerospace electronic cameras is compressed and spliced so that elevation data can be fetched from flash memory according to the latitude and longitude of the shooting point. If the required location falls between two stored data points, linear interpolation is used to look up the elevation; linear interpolation copes well with the changing terrain of rugged mountains and hills. Finally, the deviation framework and camera controller are used to test the behaviour of deviation angle errors, and a TDI CCD camera simulation system based on an object-point-to-image-point correspondence model is used to analyze the imaging MTF and a mutual-correlation similarity measure; the simulation system accumulates the horizontal and vertical offsets by which the TDI CCD image exceeds the corresponding pixel, so as to simulate camera imaging when satellite attitude stability changes. The approach is practical: it keeps the camera's memory footprint under control while meeting the TDI CCD camera's image-motion velocity matching and imaging precision requirements.
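    A minimal sketch of the elevation lookup with linear interpolation, assuming a regular latitude/longitude grid held in memory; the grid spacing and elevation values are hypothetical stand-ins for the compressed SRTM map:

```python
import numpy as np

# Hypothetical elevation grid: rows are latitudes, columns are longitudes,
# sampled every 0.1 degree, standing in for the compressed SRTM map.
lats = np.arange(40.0, 41.0, 0.1)
lons = np.arange(120.0, 121.0, 0.1)
elev = np.random.default_rng(0).uniform(0, 2000, (lats.size, lons.size))

def elevation_at(lat, lon):
    """Bilinear (linear in each axis) interpolation between grid points."""
    i = np.searchsorted(lats, lat) - 1
    j = np.searchsorted(lons, lon) - 1
    t = (lat - lats[i]) / (lats[i + 1] - lats[i])
    u = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - t) * (1 - u) * elev[i, j] + t * (1 - u) * elev[i + 1, j]
            + (1 - t) * u * elev[i, j + 1] + t * u * elev[i + 1, j + 1])

print(elevation_at(40.43, 120.57))  # interior point of the hypothetical grid
```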

  15. Fast and flexible selection with a single switch.

    PubMed

    Broderick, Tamara; MacKay, David J C

    2009-10-22

    Selection methods that require only a single-switch input, such as a button click or blink, are potentially useful for individuals with motor impairments, mobile technology users, and individuals wishing to transmit information securely. We present a single-switch selection method, "Nomon," that is general and efficient. Existing single-switch selection methods require selectable options to be arranged in ways that limit potential applications. By contrast, traditional operating systems, web browsers, and free-form applications (such as drawing) place options at arbitrary points on the screen. Nomon, however, has the flexibility to select any point on a screen. Nomon adapts automatically to an individual's clicking ability; it allows a person who clicks precisely to make a selection quickly and allows a person who clicks imprecisely more time to make a selection without error. Nomon reaps gains in information rate by allowing the specification of beliefs (priors) about option selection probabilities and by avoiding tree-based selection schemes in favor of direct (posterior) inference. We have developed both a Nomon-based writing application and a drawing application. To evaluate Nomon's performance, we compared the writing application with a popular existing method for single-switch writing (row-column scanning). Novice users wrote 35% faster with the Nomon interface than with the scanning interface. An experienced user (author TB, with 10 hours practice) wrote at speeds of 9.3 words per minute with Nomon, using 1.2 clicks per character and making no errors in the final text.
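    Nomon's inference is described here only at a high level; the following is a hedged toy sketch of direct posterior inference over options, assuming each option is distinguished by a characteristic click-timing offset modeled as Gaussian (all names and parameters are hypothetical, not Nomon's implementation):

```python
import numpy as np

def update_posterior(prior, click_offset, option_phases, sigma=0.1):
    """One Bayesian update: multiply the prior over options by the
    likelihood of the observed click offset under each option's phase."""
    likelihood = np.exp(-0.5 * ((click_offset - option_phases) / sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical: four on-screen options with distinct timing phases.
phases = np.array([0.0, 0.25, 0.5, 0.75])
belief = np.full(4, 0.25)            # uniform prior over options
for offset in [0.52, 0.48, 0.50]:    # three observed clicks
    belief = update_posterior(belief, offset, phases)
print(belief)  # mass concentrates on the option whose phase matches the clicks
```

    A wider sigma plays the role of adapting to an imprecise clicker: more clicks are needed before one option dominates the posterior.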

  16. Multi-locus phylogeny of dolphins in the subfamily Lissodelphininae: character synergy improves phylogenetic resolution

    PubMed Central

    Harlin-Cognato, April D; Honeycutt, Rodney L

    2006-01-01

    Background Dolphins of the genus Lagenorhynchus are anti-tropically distributed in temperate to cool waters. Phylogenetic analyses of cytochrome b sequences have suggested that the genus is polyphyletic; however, many relationships were poorly resolved. In this study, we present a combined-analysis phylogenetic hypothesis for Lagenorhynchus and members of the subfamily Lissodelphininae, which is derived from two nuclear and two mitochondrial data sets and the addition of 34 individuals representing 9 species. In addition, we characterize with parsimony and Bayesian analyses the phylogenetic utility and interaction of characters with statistical measures, including the utility of highly consistent (non-homoplasious) characters as a conservative measure of phylogenetic robustness. We also explore the effects of removing sources of character conflict on phylogenetic resolution. Results Overall, our study provides strong support for the monophyly of the subfamily Lissodelphininae and the polyphyly of the genus Lagenorhynchus. In addition, the simultaneous parsimony analysis resolved and/or improved resolution for 12 nodes including: (1) L. albirostris, L. acutus; (2) L. obscurus and L. obliquidens; and (3) L. cruciger and L. australis. In addition, the Bayesian analysis supported the monophyly of the Cephalorhynchus, and resolved ambiguities regarding the relationship of L. australis/L. cruciger to other members of the genus Lagenorhynchus. The frequency of highly consistent characters varied among data partitions, but the rate of evolution was consistent within data partitions. Although the control region was the greatest source of character conflict, removal of this data partition impeded phylogenetic resolution. Conclusion The simultaneous analysis approach produced a more robust phylogenetic hypothesis for Lagenorhynchus than previous studies, thus supporting a phylogenetic approach employing multiple data partitions that vary in overall rate of evolution. Even in cases where there was apparent conflict among characters, our data suggest a synergistic interaction in the simultaneous analysis, and speak against a priori exclusion of data because of potential conflicts, primarily because phylogenetic results can be less robust. For example, the removal of the control region, the putative source of character conflict, produced spurious results with inconsistencies among and within topologies from parsimony and Bayesian analyses. PMID:17078887

  17. A psycholinguistic database for traditional Chinese character naming.

    PubMed

    Chang, Ya-Ning; Hsu, Chun-Hsien; Tsai, Jie-Li; Chen, Chien-Liang; Lee, Chia-Ying

    2016-03-01

    In this study, we aimed to provide a large-scale set of psycholinguistic norms for 3,314 traditional Chinese characters, along with their naming reaction times (RTs), collected from 140 Chinese speakers. The lexical and semantic variables in the database include frequency, regularity, familiarity, consistency, number of strokes, homophone density, semantic ambiguity rating, phonetic combinability, semantic combinability, and the number of disyllabic compound words formed by a character. Multiple regression analyses were conducted to examine the predictive powers of these variables for the naming RTs. The results demonstrated that these variables could account for a significant portion of variance (55.8%) in the naming RTs. An additional multiple regression analysis was conducted to demonstrate the effects of consistency and character frequency. Overall, the regression results were consistent with the findings of previous studies on Chinese character naming. This database should be useful for research into Chinese language processing, Chinese education, or cross-linguistic comparisons. The database can be accessed via an online inquiry system (http://ball.ling.sinica.edu.tw/namingdatabase/index.html).
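    A minimal sketch of the kind of multiple regression reported (predicting naming RTs and reading off the variance accounted for), with simulated data and a few hypothetical predictors standing in for the published norms:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3314  # number of characters in the database

# Hypothetical predictors standing in for the published norms.
X = np.column_stack([
    rng.normal(size=n),   # log character frequency
    rng.normal(size=n),   # consistency
    rng.normal(size=n),   # number of strokes
])
# Simulated naming RTs with assumed effect sizes plus noise.
rt = 600 - 40 * X[:, 0] - 25 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 50, n)

model = sm.OLS(rt, sm.add_constant(X)).fit()
print(model.rsquared)   # proportion of RT variance accounted for
print(model.params)     # intercept and predictor coefficients
```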

  18. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase
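    The record is truncated, but the standard closed-form benchmark in this setting is easy to state: for coherent, Gray-coded QPSK over AWGN, the bit error rate is Q(sqrt(2 Eb/N0)). A short sketch (the Eb/N0 grid is arbitrary; the truncated report itself concerns differentially coherent detection, for which the exact expressions are more involved):

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_qpsk(ebn0_db):
    """Coherent Gray-coded QPSK over AWGN: BER = Q(sqrt(2 Eb/N0))."""
    ebn0 = 10.0 ** (np.asarray(ebn0_db) / 10.0)
    return qfunc(np.sqrt(2.0 * ebn0))

print(ber_qpsk([0, 4, 8, 12]))  # BER falls steeply as Eb/N0 increases
```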

  19. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model has been developed that directly considers eye movements (including saccades, vestibular, optokinetic, vergence, and miniature movements) as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than the eye-tracker acquisition rate essentially double pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that 'alone' minimizes the positioning error; it is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
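    A toy version of such a model, not the authors', assuming the dominant term is eye travel during the total dead time between the last eye-tracker sample and pulse delivery; the eye speed and delay values are assumptions for illustration:

```python
# Toy model: pulse positioning error as eye travel during total dead time.
# All numbers are assumptions for illustration, not the paper's parameters.

eye_speed_mm_per_s = 150.0  # assumed corneal-plane speed during a saccade

def positioning_error_mm(acq_rate_hz, latency_s, scanner_s):
    """Worst-case eye travel between the last eye image and pulse delivery."""
    dead_time = 1.0 / acq_rate_hz + latency_s + scanner_s
    return eye_speed_mm_per_s * dead_time

for rate in (60.0, 100.0, 500.0, 1000.0):
    err = positioning_error_mm(rate, latency_s=0.002, scanner_s=0.001)
    print(f"{rate:.0f} Hz -> {err:.3f} mm")
```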

  20. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (the median recovery duration is 0 seconds).
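    A minimal sketch of comparing k-out-of-n configurations, under the simplifying assumption of independent machines with identical availability p (the thesis instead fits reward models to measured error data; p here is an arbitrary illustrative value):

```python
from math import comb

def k_out_of_n_availability(k, n, p):
    """Probability that at least k of n independent machines are up,
    each with steady-state availability p."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

p = 0.95  # assumed single-machine availability
for k in range(7, 0, -1):
    print(f"{k}-out-of-7: {k_out_of_n_availability(k, 7, p):.4f}")
```

    As in the thesis's reward-rate comparison, the largest jumps occur between the strictest configurations; relaxing k further yields diminishing gains.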

  1. What limits the morphological disparity of clades?

    PubMed Central

    Oyston, Jack W.; Hughes, Martin; Wagner, Peter J.; Gerber, Sylvain; Wills, Matthew A.

    2015-01-01

    The morphological disparity of species within major clades shows a variety of trajectory patterns through evolutionary time. However, there is a significant tendency for groups to reach their maximum disparity relatively early in their histories, even while their species richness or diversity is comparatively low. This pattern of early high-disparity suggests that there are internal constraints (e.g. developmental pleiotropy) or external restrictions (e.g. ecological competition) upon the variety of morphologies that can subsequently evolve. It has also been demonstrated that the rate of evolution of new character states decreases in most clades through time (character saturation), as does the rate of origination of novel bodyplans and higher taxa. Here, we tested whether there was a simple relationship between the level or rate of character state exhaustion and the shape of a clade's disparity profile: specifically, its centre of gravity (CG). In a sample of 93 extinct major clades, most showed some degree of exhaustion, but all continued to evolve new states up until their extinction. Projection of states/steps curves suggested that clades realized an average of 60% of their inferred maximum numbers of states. Despite a weak but significant correlation between overall levels of homoplasy and the CG of clade disparity profiles, there were no significant relationships between any of our indices of exhaustion curve shape and the clade disparity CG. Clades showing early high-disparity were no more likely to have early character saturation than those with maximum disparity late in their evolution. PMID:26640649
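    The centre-of-gravity statistic is not written out in the record; a common definition in this literature, offered here as a hedged reconstruction rather than necessarily the exact variant used, is the disparity-weighted mean of time, rescaled by clade duration so that values below 0.5 indicate early (bottom-heavy) disparity:

```latex
% Centre of gravity of a disparity profile sampled at times t_i with
% disparity d_i, then rescaled by the clade's first/last appearance times.
CG = \frac{\sum_i d_i\, t_i}{\sum_i d_i},
\qquad
CG^{*} = \frac{CG - t_{\mathrm{first}}}{t_{\mathrm{last}} - t_{\mathrm{first}}}
```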

  2. Research on the optoacoustic communication system for speech transmission by variable laser-pulse repetition rates

    NASA Astrophysics Data System (ADS)

    Jiang, Hongyan; Qiu, Hongbing; He, Ning; Liao, Xin

    2018-06-01

    For optoacoustic communication from in-air platforms to submerged apparatus, a method based on speech recognition and variable laser-pulse repetition rates is proposed, which realizes character encoding and transmission for speech. First, the theory and spectral characteristics of laser-generated underwater sound are analyzed; next, character conversion and encoding for speech, as well as the code pattern for laser modulation, are studied; finally, experiments to verify the system design are carried out. Results show that the optoacoustic system, in which laser modulation is controlled by speech-to-character baseband codes, improves flexibility in the receiving location of underwater targets as well as the real-time performance of information transmission. In the overwater transmitter, a pulsed laser is triggered by speech signals at repetition rates randomly selected in the range of one to fifty Hz; in the underwater receiver, the laser pulse repetition rate and the data can be recovered from the preamble and information codes of the corresponding laser-generated sound. When the laser pulse energy is appropriate, real-time transmission of speaker-independent speech can be realized in this way, which addresses the problem of limited underwater bandwidth and provides a technical approach for air-sea communication.

  3. Comparative study of biological and technological characters in three generations of silkworm Bombyx mori L. ameiotic, parthenogenetically cloned lines.

    PubMed

    Greiss, H; Vassilieva, J; Petkov, N; Petkov, Z

    2004-11-01

    To detect any deviation in the biological and technological characters of eight ameiotic, parthenogenetically cloned lines of Bombyx mori L. of different origins from a normal sexually reproduced control line over three generations. The comparative study of the three generations was conducted at SES, Vratza, a unit of the National Center for Agrarian Sciences of Bulgaria, with all environmental rearing conditions fixed. The ameiotic parthen-clones displayed good parthenogenetic development, although total hatchability was significantly lower than in the sexually reproducing control population. Survival rates of clones and control were not significantly different. All clones displayed significantly longer larval periods. A slight decline in the second generation, and a steeper one in the third generation, was observed for all eight cloned lines in cocoon weight, shell weight, and shell ratio, and these differences were statistically significant. Cocoon yield was significantly lower than the control throughout the three generations. Our parthen-cloning method has a high rate of success in comparison to other cloning methods; although the cloned progeny populations were technologically weaker (cocoon weight, shell weight, and shell ratio), the biological characters (parthenogenetic development and survival rate) were not compromised. Further study is needed to determine the thermal needs of the cloned embryos and the metabolic rate of all stages.

  4. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. The core technology, the rotating scheme, has been studied by numerous researchers. As one of the key parameters, the rotating angular rate seriously influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method. PMID:24759115

  5. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    PubMed

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
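    The paper's estimator is specific to STR read data, but its general shape can be sketched: treat each read as noncanonical if either sequencing error (rate assumed known) or the RT-error/RDD process (rate to be estimated) alters it, and maximize the binomial likelihood. All counts and rates below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

seq_err = 0.002   # assumed known per-read sequencing error rate
k, n = 57, 10000  # hypothetical noncanonical reads out of n total reads

def neg_log_lik(p):
    """A read looks noncanonical if either process alters it."""
    q = p + seq_err - p * seq_err  # combined per-read alteration probability
    return -(k * np.log(q) + (n - k) * np.log(1.0 - q))

res = minimize_scalar(neg_log_lik, bounds=(1e-8, 0.5), method="bounded")
print("MLE of the RT-error/RDD rate:", res.x)
```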

  6. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System.

    PubMed

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-05-04

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  7. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  8. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  9. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  10. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  11. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Seatbelt and helmet depiction on the big screen: blockbuster injury prevention messages?

    PubMed

    Cowan, John A; Dubosh, Nicole; Hadley, Craig

    2009-03-01

    Injuries from vehicle crashes are a major cause of death among American youth. Many of these injuries are worsened because of noncompliant safety practices. Messages delivered by mass media are omnipresent in young peoples' lives and influence their behavior patterns. In this investigation, we analyzed seat belt and helmet messages from a sample of top-grossing motion pictures with emphasis on scene context and character demographics. Content analysis of 50 top-grossing motion pictures for years 2000 to 2004, with coding for seat belt and helmet usage by trained media coders. In 48 of 50 movies (53% PG-13; 33% R; 10% PG; 4% G) with vehicle scenes, 518 scenes (82% car/truck; 7% taxi/limo; 7% motorcycle; 4% bicycle/skateboard) were coded. Overall, seat belt and helmet usage rates were 15.4% and 33.3%, respectively, with verbal indications for seat belt or helmet use found in 1.0% of scenes. Safety compliance rates varied by character race (18.3% white; 6.5% black; p = 0.036). No differences in compliance rates were noted for high-speed or unsafe vehicle operation. The injury rate for noncompliant characters involved in crashes was 10.7%. A regression model demonstrated black character race and escape scenes most predictive of noncompliant safety behavior. Safety compliance messages and images are starkly absent in top-grossing motion pictures resulting in, at worst, a deleterious effect on vulnerable populations and public health initiatives, and, at minimum, a lost opportunity to prevent injury and death. Healthcare providers should call on the motion picture industry to improve safety compliance messages and images in their products delivered for mass consumption.

  13. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  14. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  15. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
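    The analytic expression itself is not reproduced in the record, but the propagation-of-errors step it references is the standard first-order (delta-method) expansion; for a rating R computed from the mean attenuation and its standard deviation, a hedged reconstruction is:

```latex
% First-order propagation of uncertainty for a rating R(\bar{x}, s) that
% depends on the mean attenuation \bar{x} and standard deviation s of n
% subjects' REAT results:
\sigma_R^2 \approx
\left(\frac{\partial R}{\partial \bar{x}}\right)^{2}\sigma_{\bar{x}}^{2}
+ \left(\frac{\partial R}{\partial s}\right)^{2}\sigma_{s}^{2},
\qquad
\sigma_{\bar{x}}^{2} = \frac{s^{2}}{n}
```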

  16. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  17. Multi-exemplar affinity propagation.

    PubMed

    Wang, Chang-Dong; Lai, Jian-Huang; Suen, Ching Y; Zhu, Jun-Yong

    2013-09-01

    The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and it produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate when applied to model multisubclasses in some situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one to create a new multi-exemplar affinity propagation (MEAP) algorithm. This new model automatically determines the number of exemplars in each cluster associated with a super exemplar to approximate the subclasses in the category. Solving the model is NP-hard and we tackle it with the max-sum belief propagation to produce neighborhood maximum clusters, with no need to specify beforehand the number of clusters, multi-exemplars, and superexemplars. Also, utilizing the sparsity in the data, we are able to reduce substantially the computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
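    MEAP is not available in standard libraries; as a point of reference, a minimal sketch of the single-exemplar AP baseline it extends, using scikit-learn on illustrative data with two subclasses inside one category:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Two hypothetical subclasses within one visual category, plus a second category.
X = np.vstack([
    rng.normal([0, 0], 0.3, (30, 2)),
    rng.normal([2, 0], 0.3, (30, 2)),
    rng.normal([0, 3], 0.3, (30, 2)),
])

# Classic AP: one exemplar per cluster, no preset number of clusters.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
print("exemplars found:", len(ap.cluster_centers_indices_))
print("cluster sizes:", np.bincount(ap.labels_))
```

    Single-exemplar AP will split the two subclasses into separate clusters; MEAP's contribution is to tie such sub-exemplars to a shared superexemplar so the category is modeled as one multi-exemplar cluster.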

  18. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
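    A minimal sketch of the binomial reasoning behind these claims (observation counts and true rates are illustrative, and the paper's exact accuracy criteria differ): the probability that surveyors observe an error fraction above the 5% citation threshold, as a function of the true rate and the number of med-pass observations.

```python
from scipy.stats import binom

def prob_cited(true_rate, n_observations, threshold=0.05):
    """Probability that the observed error fraction exceeds the threshold."""
    cutoff = int(threshold * n_observations)  # citation if errors > cutoff
    return binom.sf(cutoff, n_observations, true_rate)

# With few observations, a facility just under the threshold is still cited
# often (Type I error), and the two true rates are hard to tell apart.
for n in (50, 200, 2000):
    print(n, "obs:",
          round(prob_cited(0.04, n), 3), "(true rate 4%)",
          round(prob_cited(0.10, n), 3), "(true rate 10%)")
```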

  19. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  20. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  1. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, occurring along the process from the beginning of test requests to the admission of specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples, with regard to their rates in certain test groups in our laboratory. This preliminary study was designed around the samples rejected during a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient specimen volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.

  2. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and shows the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  3. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
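    A toy sketch of the attention-allocation idea, not the Northrop model itself: each axis receives an urgency score from its tracking error and error rate, and attention goes to the axis with the highest score. The weighting constant is an assumption.

```python
def urgency(error, error_rate, k_rate=0.5):
    """Toy urgency function combining error and error-rate terms;
    setting k_rate=0 recovers the error-only variant."""
    return abs(error) + k_rate * abs(error_rate)

def attended_axis(errors, error_rates, k_rate=0.5):
    """Index of the axis with the highest urgency score."""
    scores = [urgency(e, de, k_rate) for e, de in zip(errors, error_rates)]
    return max(range(len(scores)), key=scores.__getitem__)

# Axis 1 has a smaller error but is diverging quickly; the error-rate term
# shifts attention to it, which the error-only model would miss.
print(attended_axis([1.0, 0.6], [0.0, 1.5], k_rate=0.5))  # -> 1
print(attended_axis([1.0, 0.6], [0.0, 1.5], k_rate=0.0))  # -> 0
```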

  4. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors for which a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors were made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was computed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41%), "Wrong patient" (13%) and "Omission of drug" (12%). In 95% of the cases, an average of 1.4 individual contributory factors was found, the most common being "Negligence, forgetfulness or lack of attentiveness" (68%), "Proper protocol not followed" (25%), "Lack of knowledge" (13%) and "Practice beyond scope" (12%). In 78% of the cases, an average of 1.7 system contributory factors was found, the most common being "Role overload" (36%), "Unclear communication or orders" (30%) and "Lack of adequate access to guidelines or unclear organisational routines" (30%). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. Experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.

  5. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r1″ + r2″. (ii) If FNS determines...

  6. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.

  7. Improving the quality of cognitive screening assessments: ACEmobile, an iPad-based version of the Addenbrooke's Cognitive Examination-III.

    PubMed

    Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F

    2018-01-01

    Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.

  8. Virtual Viewing Time: The Relationship between Presence and Sexual Interest in Androphilic and Gynephilic Men

    PubMed Central

    Fromberger, Peter; Meyer, Sabrina; Kempf, Christina; Jordan, Kirsten; Müller, Jürgen L.

    2015-01-01

    Virtual Reality (VR) has successfully been used in the research of human behavior for more than twenty years. The main advantage of VR is its capability to induce a high sense of presence, which results in emotions and behavior very close to those shown in real situations. In the context of sex research, only a few studies have used high-immersive VR so far; those that did are found mostly in the field of forensic psychology. Nevertheless, the relationship between presence and sexual interest still remains unclear. The present study is the first to examine the advantages of high-immersive VR in comparison to a conventional standard desktop system regarding their capability to measure sexual interest. Twenty-five gynephilic and 20 androphilic healthy men underwent three experimental conditions, which differed in their ability to induce a sense of presence. In each condition, participants were asked to rate ten male and ten female virtual human characters regarding their sexual attractiveness. Without their knowledge, the subjects’ viewing time was assessed throughout the rating. Subjects were then asked to rate the sense of presence they had experienced as well as their perceived realism of the characters. Results suggested that stereoscopic viewing can significantly enhance the subjective sexual attractiveness of sexually relevant characters. Furthermore, in all three conditions participants looked significantly longer at sexually relevant virtual characters than at sexually non-relevant ones. The high-immersion condition provided the best discriminant validity. From a statistical point of view, however, the sense of presence had no significant influence on the discriminant validity of the viewing time task. The study showed that high-immersive virtual environments enhance realism ratings as well as ratings of sexual attractiveness of three-dimensional human stimuli in comparison to standard desktop systems. Results also show that viewing time seems to be influenced neither by these enhanced attractiveness ratings nor by the realism of the stimuli. This indicates how important task-specific mechanisms of the viewing time effect are. PMID:25992790

  9. Social Anxiety Modulates Subliminal Affective Priming

    PubMed Central

    Paul, Elizabeth S.; Pope, Stuart A. J.; Fennell, John G.; Mendl, Michael T.

    2012-01-01

    Background It is well established that there is anxiety-related variation between observers in the very earliest, pre-attentive stage of visual processing of images such as emotionally expressive faces, often leading to enhanced attention to threat in a variety of disorders and traits. Whether there is also variation in early-stage affective (i.e. valenced) responses resulting from such images, however, is not yet known. The present study used the subliminal affective priming paradigm to investigate whether people varying in trait social anxiety also differ in their affective responses to very briefly presented, emotionally expressive face images. Methodology/Principal Findings Participants (n = 67) completed a subliminal affective priming task, in which briefly presented and smiling, neutral and angry faces were shown for 10 ms durations (below objective and subjective thresholds for visual discrimination), and immediately followed by a randomly selected Chinese character mask (2000 ms). Ratings of participants' liking for each Chinese character indicated the degree of valenced affective response made to the unseen emotive images. Participants' ratings of their liking for the Chinese characters were significantly influenced by the type of face image preceding them, with smiling faces generating more positive ratings than neutral and angry ones (F(2,128) = 3.107, p<0.05). Self-reported social anxiety was positively correlated with ratings of smiling relative to neutral-face primed characters (Pearson's r = .323, p<0.01). Individual variation in self-reported mood awareness was not associated with ratings. Conclusions Trait social anxiety is associated with individual variation in affective responding, even in response to the earliest, pre-attentive stage of visual image processing. However, the fact that these priming effects are limited to smiling and not angry (i.e. threatening) images leads us to propose that the pre-attentive processes involved in generating the subliminal affective priming effect may be different from those that generate attentional biases in anxious individuals. PMID:22615873

  10. Social anxiety modulates subliminal affective priming.

    PubMed

    Paul, Elizabeth S; Pope, Stuart A J; Fennell, John G; Mendl, Michael T

    2012-01-01

    It is well established that there is anxiety-related variation between observers in the very earliest, pre-attentive stage of visual processing of images such as emotionally expressive faces, often leading to enhanced attention to threat in a variety of disorders and traits. Whether there is also variation in early-stage affective (i.e. valenced) responses resulting from such images, however, is not yet known. The present study used the subliminal affective priming paradigm to investigate whether people varying in trait social anxiety also differ in their affective responses to very briefly presented, emotionally expressive face images. Participants (n = 67) completed a subliminal affective priming task, in which briefly presented and smiling, neutral and angry faces were shown for 10 ms durations (below objective and subjective thresholds for visual discrimination), and immediately followed by a randomly selected Chinese character mask (2000 ms). Ratings of participants' liking for each Chinese character indicated the degree of valenced affective response made to the unseen emotive images. Participants' ratings of their liking for the Chinese characters were significantly influenced by the type of face image preceding them, with smiling faces generating more positive ratings than neutral and angry ones (F(2,128) = 3.107, p<0.05). Self-reported social anxiety was positively correlated with ratings of smiling relative to neutral-face primed characters (Pearson's r = .323, p<0.01). Individual variation in self-reported mood awareness was not associated with ratings. Trait social anxiety is associated with individual variation in affective responding, even in response to the earliest, pre-attentive stage of visual image processing. However, the fact that these priming effects are limited to smiling and not angry (i.e. threatening) images leads us to propose that the pre-attentive processes involved in generating the subliminal affective priming effect may be different from those that generate attentional biases in anxious individuals.

  11. Documentation of study medication dispensing in a prospective large randomized clinical trial: experiences from the ARISTOTLE Trial.

    PubMed

    Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B

    2013-09-01

    In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.

  12. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary markedly with site characteristics. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are discussed.
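
    A minimal Monte Carlo sketch of the propagation idea, assuming a hypothetical power-law rating curve Q = a(h - b)^c and illustrative error magnitudes; the paper's actual framework is a full Bayesian treatment that also models structural rating-curve errors:

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical rating curve Q = a * (h - b)^c; parameters and error
        # magnitudes below are illustrative, not from the paper
        a, b, c = 20.0, 0.15, 1.8              # m3/s and m
        h_obs = np.array([0.8, 1.2, 2.0])      # observed stages, m

        n_mc = 10000
        eps_ns = rng.normal(0.0, 0.01, (n_mc, h_obs.size))  # non-systematic: resolution, waves
        eps_sys = rng.normal(0.0, 0.02, (n_mc, 1))          # systematic: calibration offset
        a_samp = rng.normal(a, 1.0, (n_mc, 1))              # parametric curve uncertainty

        h_true = h_obs + eps_ns + eps_sys      # systematic offset shared per realization
        q_samp = a_samp * np.clip(h_true - b, 0.0, None) ** c

        for i, h in enumerate(h_obs):
            lo, hi = np.percentile(q_samp[:, i], [2.5, 97.5])
            print(f"h = {h:.2f} m: Q 95% interval [{lo:.1f}, {hi:.1f}] m3/s")

    Note the design choice that matters for long-term averages: the systematic gauge error is drawn once per realization and shared across all records, so it does not average out the way the non-systematic term does.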

  13. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    PubMed

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9 pharmacists, range 4-17, collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, and type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  14. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
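
    For reference, the Benjamini and Hochberg (1995) linear step-up procedure that serves as the classical comparator in the article can be stated in a few lines. This is the standard textbook algorithm, not the resampling-based empirical Bayes procedure the article proposes:

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            # BH linear step-up: reject the k smallest p-values, where k is
            # the largest i with p_(i) <= q * i / m
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            k = np.nonzero(below)[0].max() + 1 if below.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True
            return reject

        # Example: a mixture of null and strongly non-null p-values
        rng = np.random.default_rng(3)
        pv = np.r_[rng.uniform(size=90), rng.uniform(0, 0.001, size=10)]
        print(benjamini_hochberg(pv).sum(), "rejections at FDR 0.05")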

  15. Microgravity Foam Structure and Rheology

    NASA Technical Reports Server (NTRS)

    Durian, Douglas J.

    1996-01-01

    The objective of this research was to exploit rheological and multiple-light scattering techniques, and ultimately microgravity conditions, in order to quantify and elucidate the unusual elastic character of foams in terms of their underlying microscopic structure and dynamics. Special interest was in determining how this elastic character vanishes, i.e. how the foam melts into a simple viscous liquid, as a function of both increasing liquid content and shear strain rate.

  16. Character and dealing with laughter: the relation of self- and peer-reported strengths of character with gelotophobia, gelotophilia, and katagelasticism.

    PubMed

    Proyer, René T; Wellenzohn, Sara; Ruch, Willibald

    2014-01-01

    We hypothesized that gelotophobia (the fear of being laughed at), gelotophilia (the joy of being laughed at), and katagelasticism (the joy of laughing at others) relate differently to character strengths. In Study 1 (N = 5,134), self-assessed gelotophobia was primarily negatively related to strengths (especially to lower hope, zest, and love), whereas only modesty yielded positive relations. Gelotophilia demonstrated mainly positive relations with humor, zest, and social intelligence. Katagelasticism was largely unrelated to character strengths, with humor demonstrating the comparatively highest coefficients. Study 2 consisted of N = 249 participants who provided self- and peer-ratings of strengths and self-reports on the three dispositions. The results converged well with those from Study 1. When comparing self- and peer-reports, those higher in gelotophobia under-estimated and those higher in gelotophilia over-estimated their virtuousness, whereas those higher in katagelasticism seemed to have a realistic appraisal of their strengths. Peer-rated (low) hope and modesty contributed to the prediction of gelotophobia beyond self-reports. The same was true for low modesty, creativity, low bravery, and authenticity regarding gelotophilia, and for low love of learning regarding katagelasticism. The results suggest that there is a stable relation between the way people deal with ridicule and laughter and their virtuousness.

  17. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation, and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  18. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    PubMed Central

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-01-01

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into the SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the self-compensation of inertial sensor errors. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors, making it possible to maintain high-precision autonomous navigation with the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707
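
    The core idea of rotation modulation can be illustrated with a toy computation: a constant body-frame sensor bias, rotated at a constant angular rate, maps into the navigation frame as a sinusoid whose integral over a full revolution is approximately zero. The values below are illustrative, not from the paper:

        import numpy as np

        dt = 0.001
        t = np.arange(0.0, 1.0, dt)
        omega = 2 * np.pi                      # 1 rev/s modulation rate (assumed)
        bias_body = np.array([0.02, -0.01])    # constant x/y gyro biases, rad/s

        theta = omega * t
        # Body-frame bias as seen in the navigation frame while the IMU rotates
        bx = bias_body[0] * np.cos(theta) - bias_body[1] * np.sin(theta)
        by = bias_body[0] * np.sin(theta) + bias_body[1] * np.cos(theta)

        # Integrated error over one full revolution: ~0 with modulation,
        # versus linear growth bias * t without it
        print(bx.sum() * dt, by.sum() * dt)
        print(bias_body * t[-1])

    The paper's subject is precisely what this toy ignores: when omega is unstable or the rotation accelerates and decelerates, the sinusoid no longer integrates to zero, and the residual must be compensated.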

  19. Nonmonotonic variation of seawater ⁸⁷Sr/⁸⁶Sr across the Ivorian/Chadian boundary (Mississippian, Osagean): Evidence from marine cements within the Irish Waulsortian Limestone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douthit, T.L.; Meyers, W.J.; Hanson, G.N.

    1993-05-01

    Detailed analysis of compositionally unaltered marine fibrous cements (MFC) from a single core through the Mississippian Irish Waulsortian Limestone indicates that the variation of seawater ⁸⁷Sr/⁸⁶Sr is nonmonotonic across the Ivorian-Chadian boundary. This nonmonotonic variation has not been recognized by previous studies. Furthermore, marine cement yielded ⁸⁷Sr/⁸⁶Sr ratios lower than previously reported values for the Ivorian-Chadian (Osagean). Marine fibrous cements are interpreted to be compositionally unaltered on the basis of their nonluminescent character and stable isotope (C, O) compositions comparable to previous estimates of Mississippian marine calcite. The isotope chemistry (C, O, Sr) and cathodoluminescent character of the marine fibrous cements therefore remained intact during their conversion from high-Mg calcite to low-Mg calcite + microdolomite, a conversion that probably took place in marine water during precipitation of Zone 1 calcite cement, the oldest non-MFC cement. High stratigraphic resolution was obtained by restricting the sample set to a single core, 429 m long, thereby eliminating chronostratigraphic correlation errors. The core is estimated to represent about 9.8 million years of Waulsortian Limestone deposition. The maximum rate of change in seawater ⁸⁷Sr/⁸⁶Sr is −0.00012/Ma, comparable in magnitude to Tertiary values. The authors' data document the presence of fine-scale seawater ⁸⁷Sr/⁸⁶Sr modulations for the Ivorian/Chadian, in contrast to the previously published monotonic seawater ⁸⁷Sr/⁸⁶Sr curve for this interval, and emphasize the importance of well-characterized intraformational isotopic baselines.

  20. Trends of sexual and violent content by gender in top-grossing U.S. films, 1950-2006.

    PubMed

    Bleakley, Amy; Jamieson, Patrick E; Romer, Daniel

    2012-07-01

    Because popular media such as movies can both reflect and contribute to changes in cultural norms and values, we examined gender differences and trends in the portrayal of sexual and violent content in top-grossing films from 1950 to 2006. The sample included 855 of the top-grossing films released over 57 years, from 1950 to 2006. The number of female and male main characters and their involvement in sexual and violent behavior were coded and analyzed over time. The relationships between sexual and violent behavior within films were also assessed. The average number of male and female main characters in films has remained stable over time, with male characters outnumbering female characters by more than two to one. Female characters were twice as likely as male characters to be involved in sex, with differences in more explicit sex growing over time. Violence has steadily increased for both male and female characters. Although women continue to be underrepresented in films, their disproportionate portrayal in more explicit sexual content has grown over time. Their portrayal in violent roles has also grown, but at the same rate as men. Implications of exposure to these trends among young movie-going men and women are discussed. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  1. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") errors or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), i.e., cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, i.e., oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404

  2. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") errors or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), i.e., cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, i.e., oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).

  3. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports on the attempts of various medical institutions to prevent dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors. Furthermore, we investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). Then, the rates at which "dispensing errors" progressed to "damage to patients" were compared as an index of danger between the two groups and among the three classes. Consequently, the rate of damage in the "efficacy similarity (-) group" was significantly higher than that in the "efficacy similarity (+) group". Furthermore, the rate of damage was highest for "initial phase errors" and lowest for "final phase errors" among the three classes. From the results of this study, it became clear that the earlier in the dispensing process an error occurs, the more severe the damage to patients becomes.

  4. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  5. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  6. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    ERIC Educational Resources Information Center

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…

  7. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  8. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
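
    The mechanism is easy to reproduce in simulation: if y truly depends only on x1, but x1 is observed with error and a correlated predictor x2 is included in the model, the test on x2 rejects far more often than the nominal rate. The effect sizes, correlation, and reliabilities below are assumed for illustration, not taken from the article:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)

        def false_positive_rate(reliability, n=200, reps=1000, alpha=0.05):
            # Share of replications where x2 is "significant" although y
            # depends only on x1; x1 enters the model with measurement error
            hits = 0
            for _ in range(reps):
                x1 = rng.normal(size=n)
                x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)   # correlated predictor
                y = x1 + rng.normal(size=n)                     # true model: x1 only
                err_sd = np.sqrt((1 - reliability) / reliability)
                x1_obs = x1 + rng.normal(scale=err_sd, size=n)  # unreliable measure
                X = sm.add_constant(np.column_stack([x1_obs, x2]))
                hits += sm.OLS(y, X).fit().pvalues[2] < alpha
            return hits / reps

        print(false_positive_rate(1.0))   # ~0.05: nominal when x1 is error-free
        print(false_positive_rate(0.5))   # typically far above 0.05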

  9. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)

  10. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  11. Meet The Simpsons: top-down effects in face learning.

    PubMed

    Bonner, Lesley; Burton, A Mike; Jenkins, Rob; McNeill, Allan; Bruce, Vicki

    2003-01-01

    We examined whether prior knowledge of a person affects the visual processes involved in learning a face. In two experiments, subjects were taught to associate human faces with characters they knew (from the TV show The Simpsons) or characters they did not (novel names). In each experiment, knowledge of the character predicted performance in a recognition memory test, relying only on old/new confidence ratings. In experiment 1, we established the technique and showed that there is a face-learning advantage for known people, even when face items are counterbalanced for familiarity across the experiment. In experiment 2 we replicated the effect in a setting which discouraged subjects from attending more to known than unknown people, and eliminated any visual association between face stimuli and a character from The Simpsons. We conclude that prior knowledge about a person can enhance learning of a new face.

  12. An electrophysiological investigation of the role of orthography in accessing meaning of Chinese single-character words.

    PubMed

    Wang, Kui

    2011-01-10

    This study examined the role of orthography in the semantic activation of Chinese single-character words. Eighteen native Chinese-speaking adults were recruited to take part in a Stroop experiment consisting of one-character color words and pseudowords that were orthographically similar to these color words. Classic behavioral Stroop effects, namely longer reaction times for incongruent conditions than for congruent conditions, were demonstrated for color words and pseudowords. A clear N450 was also observed in the two incongruent conditions. The participants were also asked to perform a visual judgment task immediately following the Stroop experiment. Results from the visual judgment task showed that participants could distinguish color words from pseudowords well (with a mean accuracy rate over 90 percent). Taken together, these findings support a direct orthography-semantics route for Chinese one-character words. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Intelligent form removal with character stroke preservation

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.

    1996-03-01

    A new technique for intelligent form removal has been developed along with a new method for evaluating its impact on optical character recognition (OCR). All the dominant lines in the image are automatically detected using the Hough line transform and intelligently erased while simultaneously preserving overlapping character strokes by computing line width statistics and keying off of certain visual cues. This new method of form removal operates on loosely defined zones with no image deskewing. Any field in which the writer is provided a horizontal line to enter a response can be processed by this method. Several examples of processed fields are provided, including a comparison of results between the new method and a commercially available forms removal package. Even if this new form removal method did not improve character recognition accuracy, it would still be a significant improvement to the technology because the requirement of a priori knowledge of the form's geometric details has been greatly reduced. This relaxes the recognition system's dependence on rigid form design, printing, and reproduction by automatically detecting and removing some of the physical structures (lines) on the form. Using the National Institute of Standards and Technology (NIST) public domain form-based handprint recognition system, the technique was tested on a large number of fields containing randomly ordered handprinted lowercase alphabets, as these letters (especially those with descenders) frequently touch and extend through the line along which they are written. Preserving character strokes improves overall lowercase recognition performance by 3%, which is a net improvement, but a single performance number like this does not communicate how the recognition process was really influenced. Trade-offs are expected with the introduction of any new technique into a complex recognition system. To understand both the improvements and the trade-offs, a new analysis was designed to compare the statistical distributions of individual confusion pairs between two systems. As OCR technology continues to improve, sophisticated analyses like this are necessary to reduce the errors remaining in complex recognition problems.
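
    A rough sketch of the line-removal idea using OpenCV's probabilistic Hough transform is given below. The thresholds, the fixed erase thickness, and the simple above/below ink test standing in for the paper's line-width statistics and visual cues are all assumptions for illustration, not NIST's implementation:

        import cv2
        import numpy as np

        def remove_form_lines(binary_img, min_len=100):
            # binary_img: uint8, 0 = background, 255 = ink
            out = binary_img.copy()
            lines = cv2.HoughLinesP(binary_img, 1, np.pi / 180, threshold=80,
                                    minLineLength=min_len, maxLineGap=5)
            if lines is None:
                return out
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(out, (x1, y1), (x2, y2), 0, thickness=3)  # erase the rule
            # Visual cue for an overlapping stroke: ink in the original image
            # both above and below an erased pixel; restore such pixels
            erased_y, erased_x = np.nonzero((binary_img > 0) & (out == 0))
            h = binary_img.shape[0]
            for y, x in zip(erased_y, erased_x):
                if 2 <= y < h - 2 and binary_img[y - 2, x] and binary_img[y + 2, x]:
                    out[y, x] = 255
            return out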

  14. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
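
    As a generic stand-in for such an error-function-only expression (the paper's derived formula for the PPM/APD link is not reproduced here), a Gaussian-approximation bit error rate takes the form BER = 0.5 * erfc(Q / sqrt(2)), with Q the decision distance in noise standard deviations; the photon counts and excess-noise term below are purely illustrative:

        import numpy as np
        from scipy.special import erfc

        def ber_gaussian(n_signal, noise_var):
            # BER = 0.5 * erfc(Q / sqrt(2)), Q = signal / noise sigma
            q = n_signal / np.sqrt(noise_var)
            return 0.5 * erfc(q / np.sqrt(2))

        for n in (100, 200, 400):
            # noise variance: shot noise on the signal plus an assumed excess term
            print(n, ber_gaussian(n, n + 40.0**2))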

  15. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
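
    For background, the standard form of the random coding bound on the ensemble-average block error probability (stated here with a fixed input distribution Q; the maximization over Q is omitted for brevity, and this is textbook material rather than the paper's derivation) reads:

        \[
          \bar{P}_e \le e^{-n E_r(R)},
          \qquad
          E_r(R) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr],
        \]
        \[
          E_0(\rho) = -\ln \sum_{j} \Bigl[ \sum_{k} Q(k)\, P(j \mid k)^{1/(1+\rho)} \Bigr]^{1+\rho},
        \]

    where n is the block length, R the rate, and P the channel transition probabilities. The abstract's point concerns exactly this exponent: it is tight above the critical rate but not below a second, lower critical rate.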

  16. Fast approach for toner saving

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Kurilin, Ilya V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sangho; Choi, Donchul

    2011-01-01

    Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological impact. Existing toner-saving approaches have two main drawbacks: the appearance of the hardcopy in toner-saving mode is worse than in normal mode, and processing the whole rendered page bitmap requires significant computational cost. We propose to add small holes of various shapes and sizes at random places inside a character bitmap stored in the font cache. This random perforation scheme is based on the processing pipeline in the RIP of the standard printer languages PostScript and PCL. Processing text characters only, and moreover processing each character for a given font and size only once, is an extremely fast procedure. The approach does not deteriorate halftoned bitmaps or business graphics and provides toner savings of up to 15-20% for typical office documents. The rate of toner saving is adjustable. The alteration of the resulting characters' appearance is almost indistinguishable from solid black text due to the random placement of small holes inside the character regions. The suggested method automatically skips small fonts to preserve their quality. Readability of the processed text is preserved, and OCR programs successfully process such scanned hardcopies as well.
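
    A toy sketch of the random perforation idea on a cached glyph bitmap; the hole shape, size, and rate parameters are illustrative assumptions, not the authors' values:

        import numpy as np

        rng = np.random.default_rng(7)

        def perforate_glyph(glyph, font_px, hole_fraction=0.15, min_font_px=12):
            # glyph: boolean bitmap from the font cache, True = ink
            if font_px < min_font_px:
                return glyph                    # skip small fonts entirely
            out = glyph.copy()
            ys, xs = np.nonzero(glyph)
            n_holes = int(hole_fraction * ys.size / 4)   # adjustable saving rate
            for i in rng.choice(ys.size, size=n_holes, replace=False):
                y, x = ys[i], xs[i]
                out[max(y - 1, 0):y + 1, max(x - 1, 0):x + 1] = False  # ~2x2 hole
            return out

    The efficiency claim follows from caching: each distinct (character, font, size) bitmap is perforated once and reused, so no per-page bitmap processing is needed.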

  17. Putative floral brood-site mimicry, loss of autonomous selfing, and reduced vegetative growth are significantly correlated with increased diversification in Asarum (Aristolochiaceae).

    PubMed

    Sinn, Brandon T; Kelly, Lawrence M; Freudenstein, John V

    2015-08-01

    The drivers of angiosperm diversity have long been sought and the flower-arthropod association has often been invoked as the most powerful driver of the angiosperm radiation. We now know that features that influence arthropod interactions cannot only affect the diversification of lineages, but also expedite or constrain their rate of extinction, which can equally influence the observed asymmetric richness of extant angiosperm lineages. The genus Asarum (Aristolochiaceae; ∼100 species) is widely distributed in north temperate forests, with substantial vegetative and floral divergence between its three major clades, Euasarum, Geotaenium, and Heterotropa. We used Binary-State Speciation and Extinction Model (BiSSE) Net Diversification tests of character state distributions on a Maximum Likelihood phylogram and a Coalescent Bayesian species tree, inferred from seven chloroplast markers and nuclear rDNA, to test for signal of asymmetric diversification, character state transition, and extinction rates of floral and vegetative characters. We found that reduction in vegetative growth, loss of autonomous self-pollination, and the presence of putative fungal-mimicking floral structures are significantly correlated with increased diversification in Asarum. No significant difference in model likelihood was identified between symmetric and asymmetric rates of character state transitions or extinction. We conclude that the flowers of the Heterotropa clade may have converged on some aspects of basidiomycete sporocarp morphology and that brood-site mimicry, coupled with a reduction in vegetative growth and the loss of autonomous self-pollination, may have driven diversification within Asarum. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. A prospective audit of nurse independent prescribing within critical care.

    PubMed

    Carberry, Martin; Connelly, Sarah; Murphy, Jennifer

    2013-05-01

    To determine the prescribing activity of different staff groups within an intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups; and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has given the ANPCC working in critical care an opportunity to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011, in three ICUs. All drug entries made on the ICU prescription by the three groups (trainee medical staff, ANPCCs and consultant anaesthetists) were audited once for errors. Data were collected by reviewing all drug entries for errors in patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data were later entered into a Microsoft Access database. A total of 1,418 drug entries were audited from 77 patient prescription Cardexes. Error rates were as follows: 40 errors in 1,418 prescriptions (2·8%) overall; ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates differed significantly between prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). With the lowest error rate, the nurse practitioners were, in terms of errors alone, at least as diligent in prescribing as the other groups in this audit. National data are required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.

  19. Separate Medication Preparation Rooms Reduce Interruptions and Medication Errors in the Hospital Setting: A Prospective Observational Study.

    PubMed

    Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja

    2016-12-21

    Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.

  20. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
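
    A minimal sketch of the kind of quality filtering discussed above, with illustrative thresholds rather than the paper's exact combination of criteria:

        def mean_phred(qual, offset=33):
            # Sanger/Illumina 1.8+ encoding: Phred score = ASCII code - 33
            return sum(ord(c) - offset for c in qual) / len(qual)

        def keep_read(seq, qual, min_mean_q=30):
            # Two illustrative criteria: no uncalled bases, adequate mean quality
            return "N" not in seq and mean_phred(qual) >= min_mean_q

        print(keep_read("ACGT", "IIII"))   # Phred 40 throughout -> True
        print(keep_read("ACGT", "$$$$"))   # Phred 3 throughout  -> False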

  1. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  2. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going-concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, namely a neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The sample includes 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation to estimate prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate 12.22%; Type II error rate 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate 13.61%; Type II error rate 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate 10.00%; Type II error rate 15.83%).
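
    A hedged sketch of this two-stage design in scikit-learn, using synthetic stand-ins for the TEJ financial variables; the authors' exact features, classifier settings, and tuning are not reproduced:

        import numpy as np
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-ins: 172 firms (48 GCD + 124 non-GCD), 30 candidate ratios
        rng = np.random.default_rng(0)
        X = rng.normal(size=(172, 30))
        y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=172) > 0).astype(int)

        model = make_pipeline(
            StandardScaler(),
            SelectFromModel(LassoCV(cv=5)),   # stage 1: LASSO variable selection
            SVC(kernel="rbf"),                # stage 2: one of the three classifiers
        )
        print(cross_val_score(model, X, y, cv=5).mean())   # fivefold accuracy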

  3. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

    Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician responses were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, so that the error did not reach the patient. Conversely, 11% of dosing alerts were overridden by the prescribers: of these overridden alerts, 88 (11.3%) were true alerts that resulted in medication errors, and 684 (88.6%) were false positives. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.

  4. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
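
    A minimal sketch of such an off-line learning update on a second-order servo model: between repetitions, the command is corrected by a term proportional to the error and, as the abstract notes improves convergence, a term proportional to the error rate. The model constants and learning gains are illustrative assumptions:

        import numpy as np

        dt, T = 0.01, 2.0
        t = np.arange(0.0, T, dt)
        yd = np.sin(np.pi * t)                 # desired trajectory of the repeated task
        wn, zeta = 8.0, 0.7                    # illustrative servo model constants

        def run_cycle(u):
            # Forward-Euler simulation of y'' = -2*zeta*wn*y' - wn^2*y + wn^2*u
            y = np.zeros_like(u)
            v = 0.0
            for k in range(1, u.size):
                acc = -2 * zeta * wn * v - wn**2 * y[k - 1] + wn**2 * u[k - 1]
                v += acc * dt
                y[k] = y[k - 1] + v * dt
            return y

        u = yd.copy()                          # first cycle: command equals reference
        kp, kd = 0.5, 0.02                     # illustrative learning gains
        for cycle in range(10):
            e = yd - run_cycle(u)
            u = u + kp * e + kd * np.gradient(e, dt)   # error and error-rate terms
            print(cycle, np.abs(e).max())      # max tracking error shrinks per cycle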

  5. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    PubMed

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution, using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by the total number of specimens. Rates from the two periods were compared using a chi-square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
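
    A quick way to re-check the reported comparison (19/4,760 vs. 8/5,065, P=0.0225) is an uncorrected chi-square test on the 2x2 table; scipy is assumed to be available.

    ```python
    from scipy.stats import chi2_contingency

    # Error / no-error counts before and after the paradigm change;
    # correction=False reproduces the uncorrected p-value in the abstract.
    table = [[19, 4760 - 19], [8, 5065 - 8]]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"{1000*19/4760:.2f} vs {1000*8/5065:.2f} per 1,000; p = {p:.4f}")
    ```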

  6. Citation Help in Databases: The More Things Change, the More They Stay the Same

    ERIC Educational Resources Information Center

    Van Ullen, Mary; Kessler, Jane

    2012-01-01

    In 2005, the authors reviewed citation help in databases and found an error rate of 4.4 errors per citation. This article describes a follow-up study that revealed a modest improvement in the error rate to 3.4 errors per citation, still unacceptably high. The most problematic area was retrieval statements. The authors conclude that librarians…

  7. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  8. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  9. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809

  10. Automated extraction of radiation dose information from CT dose report images.

    PubMed

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving texts from CT dose report images. Optical character recognition was adopted to perform text recognitions of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
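
    A pipeline of this shape can be sketched as below; pytesseract and the regular-expression labels ("Total DLP", "CTDIvol") are assumptions for illustration, not details of the authors' implementation.

    ```python
    import re
    from PIL import Image
    import pytesseract

    # OCR a dose-report image, then parse dose quantities out of the raw text.
    text = pytesseract.image_to_string(Image.open("dose_report.png"))
    dlp_values = re.findall(r"Total\s+DLP\D*(\d+(?:\.\d+)?)", text, flags=re.I)
    ctdi_values = re.findall(r"CTDIvol\D*(\d+(?:\.\d+)?)", text, flags=re.I)
    print(dlp_values, ctdi_values)
    ```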

  11. The application of dummy noise adaptive Kalman filter in underwater navigation

    NASA Astrophysics Data System (ADS)

    Li, Song; Zhang, Chun-Hua; Luan, Jingde

    2011-10-01

    The track of an underwater target is easily affected by various factors, which cause poor Kalman filter performance when there are errors in the state and measurement models. To address this situation, a method based on dummy noise compensation is provided: dummy noise is added artificially to the state and measurement models, and the problem is then handled by an adaptive Kalman filter with unknown time-varying statistical characteristics. The simulation results for underwater navigation prove the algorithm is effective.
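
    One common way to realize this idea is to inflate the filter's noise covariances with an artificial term; the sketch below (a constant-velocity model with made-up numbers) illustrates the mechanism rather than the paper's exact algorithm.

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1, dt], [0, 1]])     # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])          # position-only measurement
    Q0 = 1e-4 * np.eye(2)               # nominal process noise
    Qd = 1e-2 * np.eye(2)               # artificial "dummy" noise for model error
    R = np.array([[1.0]])               # measurement noise

    x = np.zeros((2, 1))
    P = np.eye(2)
    for z in [0.9, 2.1, 3.2, 4.9, 7.1]:     # made-up position measurements
        # predict with the inflated covariance Q0 + Qd
        x = F @ x
        P = F @ P @ F.T + Q0 + Qd
        # measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        print(x.ravel())
    ```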

  12. 78 FR 40553 - Privacy Act of 1974: Republication of Notice of Systems of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ...In accordance with 5 U.S.C. 552a(e)(4), the Tennessee Valley Authority (TVA) is republishing in full a notice of the existence and character of each TVA system of records. TVA is correcting minor typographical and stylistic errors in previously existing notices and has updated those notices to reflect current organizational structure. Also, updates are being made to show any changes to system locations; managers and addresses; categories of individuals and records; procedures and practices for storing, retrieving, accessing, retaining, and disposing of records.

  13. How To Succeed in Promoting Your Web Site: The Impact of Search Engine Registration on Retrieval of a World Wide Web Site.

    ERIC Educational Resources Information Center

    Tunender, Heather; Ervin, Jane

    1998-01-01

    Character strings were planted in a World Wide Web site (Project Whistlestop) to test indexing and retrieval rates of five Web search tools (Lycos, infoseek, AltaVista, Yahoo, Excite). It was found that search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…

  14. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established, independent of the sampling rate. An equivalent spectral density is defined for the quantizer error, resulting in an effective SNR value. This effective SNR may be used to predict quantized-system performance from infinitely fine quantization results. Attention is then given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely finely quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
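
    In the usual textbook form of this model (our notation: quantization step Δ, sampling rate f_s, signal power σ_s²; the paper's precise definitions may differ), the quantizer error variance, its equivalent spectral density over the Nyquist band, and the resulting effective SNR are

    $$ \sigma_q^2 = \frac{\Delta^2}{12}, \qquad S_q(f) = \frac{\sigma_q^2}{f_s} \quad (|f| \le f_s/2), \qquad \mathrm{SNR}_{\mathrm{eff}} = \frac{\sigma_s^2}{\sigma_q^2}. $$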

  15. Organizational safety culture and medical error reporting by Israeli nurses.

    PubMed

    Kagan, Ilya; Barnoy, Sivia

    2013-09-01

    To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors on a daily to weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of the variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and by fostering their employees' motivation to implement improvement programs at the departmental and individual levels. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.

  16. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126

  17. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    PubMed

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. Unlocking the "Black box": internal female genitalia in Sepsidae (Diptera) evolve fast and are species-specific

    PubMed Central

    2010-01-01

    Background The species-specificity of male genitalia has been well documented in many insect groups and sexual selection has been proposed as the evolutionary force driving the often rapid, morphological divergence. The internal female genitalia, in sharp contrast, remain poorly studied. Here, we present the first comparative study of the internal reproductive system of Sepsidae. We test the species-specificity of the female genitalia by comparing recently diverged sister taxa. We also compare the rate of change in female morphological characters with the rate of fast-evolving, molecular and behavioral characters. Results We describe the ectodermal parts of the female reproductive tract for 41 species representing 21 of the 37 described genera and define 19 morphological characters with discontinuous variation found in eight structures that are part of the reproductive tract. Using a well-resolved molecular phylogeny based on 10 genes, we reconstruct the evolution of these characters across the family [120 steps; Consistency Index (CI): 0.41]. Two structures, in particular, evolve faster than the rest. The first is the ventral receptacle, which is a secondary sperm storage organ. It accounts for more than half of all the evolutionary changes observed (7 characters; 61 steps; CI: 0.46). It is morphologically diverse across genera, can be bi-lobed or multi-chambered (up to 80 chambers), and is strongly sclerotized in one clade. The second structure is the dorsal sclerite, which is present in all sepsids except Orygma luctuosum and Ortalischema albitarse. It is associated with the opening of the spermathecal ducts and is often distinct even among sister species (4 characters; 16 steps; CI: 0.56). Conclusions We find the internal female genitalia are diverse in Sepsidae and diagnostic for all species. In particular, fast-evolving structures like the ventral receptacle and dorsal sclerite are likely involved in post-copulatory sexual selection. In comparison to behavioral and molecular data, the female structures are evolving 2/3 as fast as the non-constant third positions of the COI barcoding gene. They display less convergent evolution in characters (CI = 0.54) than the third positions or sepsid mating behavior (CI_COI = 0.36; CI_BEHAV = 0.45). PMID:20831809

  19. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
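
    The mechanism is essentially Jensen's inequality: averaging brightness temperature over the footprint and then inverting is not the same as averaging the rain rate itself. A toy Monte Carlo with a stand-in forward model (our own, not the paper's) makes the bias visible:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rain_to_tb(R):                  # assumed monotone forward model
        return 150.0 + 100.0 * (1.0 - np.exp(-0.2 * R))

    def tb_to_rain(Tb):                 # its inverse, the "retrieval"
        return -np.log(1.0 - (Tb - 150.0) / 100.0) / 0.2

    R = rng.lognormal(mean=0.5, sigma=1.0, size=100_000)  # skewed rain field
    true_mean = R.mean()
    retrieved = tb_to_rain(rain_to_tb(R).mean())          # invert the mean Tb
    print(f"true {true_mean:.2f}  retrieved {retrieved:.2f}  "
          f"bias {retrieved - true_mean:+.2f}")           # underestimate
    ```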

  20. Machine Translation of Public Health Materials From English to Chinese: A Feasibility Study

    PubMed Central

    Desai, Loma

    2015-01-01

    Background Chinese is the second most common language spoken by limited English proficiency individuals in the United States, yet there are few public health materials available in Chinese. Previous studies have indicated that use of machine translation plus postediting by bilingual translators generated quality translations in a lower time and at a lower cost than human translations. Objective The purpose of this study was to investigate the feasibility of using machine translation (MT) tools (eg, Google Translate) followed by human postediting (PE) to produce quality Chinese translations of public health materials. Methods From state and national public health websites, we collected 60 health promotion documents that had been translated from English to Chinese through human translation. The English versions of the documents were then translated into Chinese using Google Translate. The MTs were analyzed for translation errors. A subset of the MT documents was postedited by native Chinese speakers with health backgrounds. Postediting time was measured. Postedited versions were then blindly compared against human translations by bilingual native Chinese quality raters. Results The most common machine translation errors were errors of word sense (40%) and word order (22%). Posteditors corrected the MTs at a rate of approximately 41 characters per minute. Raters, blinded to the source of translation, consistently selected the human translation over the MT+PE. Initial investigation to determine the reasons for the lower quality of MT+PE indicates that poor MT quality, lack of posteditor expertise, and insufficient posteditor instructions can be barriers to producing quality Chinese translations. Conclusions Our results revealed problems with using MT tools plus human postediting for translating public health materials from English to Chinese. Additional work is needed to improve MT and to carefully design postediting processes before the MT+PE approach can be used routinely in public health practice for a variety of language pairs. PMID:27227135

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
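
    A common way to express such measurements (our formulation, not necessarily the one used in this study) normalizes observed mutations by the number of bases sequenced and by the number of template doublings:

    ```python
    import math

    # Errors per base per template doubling, with doublings inferred from the
    # fold-amplification of the PCR.
    def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
        doublings = math.log2(fold_amplification)
        return mutations / (bases_sequenced * doublings)

    # e.g. 25 mutations found in 1.2 Mb of sequenced clones after 10^5-fold
    # amplification (all numbers invented for illustration)
    print(f"{pcr_error_rate(25, 1.2e6, 1e5):.2e} errors/base/doubling")
    ```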

  2. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    NASA Astrophysics Data System (ADS)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The choice of glasses or contact lenses differs from person to person; it is influenced by patient age, the amount of tear production, vision prescription, and astigmatism. Because the eye is an organ of the human body that is very important for sight, accuracy in determining which glasses or contact lenses will be used is required. This research aims to develop a decision support system that can produce the right contact-lens recommendation for refractive-error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include sample code, patient age, astigmatism, tear production rate, and vision prescription, together with the classes that determine the outcome of the decision tree. The eye-specialist test on the training data yielded an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix yielded an accuracy rate of 96.1% and an error rate of 3.1%; for the testing data, the accuracy rate was 100% with an error rate of 0%.
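
    The gain and entropy computations at the heart of ID3 are compact; the toy rows below are illustrative stand-ins for the study's patient attributes, not its actual records.

    ```python
    import math
    from collections import Counter

    # Entropy of the class label and information gain of an attribute.
    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def info_gain(rows, attr, labels):
        n = len(rows)
        remainder = 0.0
        for value in set(r[attr] for r in rows):
            subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
            remainder += len(subset) / n * entropy(subset)
        return entropy(labels) - remainder

    rows = [{"age": "young", "tears": "reduced"},
            {"age": "young", "tears": "normal"},
            {"age": "old",   "tears": "reduced"},
            {"age": "old",   "tears": "normal"}]
    labels = ["none", "soft", "none", "hard"]
    # ID3 splits on the attribute with the largest gain (here: tears, 1.0).
    print(info_gain(rows, "tears", labels), info_gain(rows, "age", labels))
    ```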

  3. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
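
    The complementarity reformulation mentioned above is commonly written with a non-linear complementarity function (our notation: gap g, constant c > 0; the abstract does not fix a specific form):

    $$ C(\lambda, u) = \lambda - \max\{0,\; \lambda + c\,(u - g)\} = 0 \;\Longleftrightarrow\; u \le g, \quad \lambda \ge 0, \quad \lambda\,(u - g) = 0, $$

    which is semi-smooth, so a generalized Newton step on the coupled system yields exactly the primal-dual active set iteration described in the abstract.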

  4. Recent language experience influences cross-language activation in bilinguals with different scripts

    PubMed Central

    Li, Chuchu; Wang, Min; Lin, Candise Y

    2017-01-01

    Purpose This study aimed to examine whether the phonological information in the non-target language is activated and its influence on bilingual processing. Approach Using the Stroop paradigm, Mandarin-English bilinguals named the ink color of Chinese characters in English in Experiment 1 and named the Chinese characters in addition to the color naming in English in Experiment 2. Twenty-four participants were recruited in each experiment. In both experiments, the visual stimuli included color characters (e.g. 红, hong2, red), homophones of the color characters (e.g. 洪, hong2, flood), characters that only shared the same syllable segment with the color characters (S+T−, e.g. 轰, hong1, boom), characters that shared the same tone but differed in segments with the color characters (S−T+, e.g. 瓶, ping2, bottle), and neutral characters (e.g. 牵, qian1, leading through). Data and analysis Planned t-tests were conducted in which participants’ naming accuracy rate and naming latency in each phonological condition were compared with the neutral condition. Findings Experiment 1 only showed the classic Stroop effect in the color character condition. In Experiment 2, in addition to the classic Stroop effect, the congruent homophone condition (e.g. 洪in red) showed a significant Stroop interference effect. These results suggested that for bilingual speakers with different scripts, phonological information in the non-target language may not be automatically activated even though the written words in the non-target language were visually presented. However, if the phonological information of the non-target language is activated in advance, it could lead to competition between the two languages, likely at both the phonological and lemma levels. Originality and significance This study is among the first to investigate whether the translation of a word is phonologically encoded in bilinguals using the Stroop paradigm. The findings improve our understanding of the underlying mechanism of bilingual processing. PMID:29056862

  5. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting.

    PubMed

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-11-01

    Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806

  7. TECHNICAL ADVANCES: Effects of genotyping protocols on success and errors in identifying individual river otters (Lontra canadensis) from their faeces.

    PubMed

    Hansen, Heidi; Ben-David, Merav; McDonald, David B

    2008-03-01

    In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared the performance of two genotyping protocols (multiple-tubes and 'comparative method') with respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.
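
    A back-of-envelope model (ours, not the paper's) shows why consensus over several PCRs matters: a true heterozygote is misread as a homozygote only when the same lone allele appears in every replicate.

    ```python
    # Probability that a true heterozygote yields a false homozygote consensus,
    # assuming each allele independently drops out with probability `dropout`
    # in each PCR replicate.
    def false_homozygote_prob(dropout, n_pcr):
        # one replicate shows only allele A if B drops and A amplifies
        one_replicate = dropout * (1.0 - dropout)
        # consensus is false if every replicate shows the same lone allele
        return 2.0 * one_replicate ** n_pcr

    for k in (1, 2, 3):
        print(k, round(false_homozygote_prob(0.2, k), 4))   # 0.32, 0.051, 0.008
    ```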

  8. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  9. Does McRuer's Law Hold for Heart Rate Control via Biofeedback Display?

    NASA Technical Reports Server (NTRS)

    Courter, B. J.; Jex, H. R.

    1984-01-01

    Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a command pulse rate and the measured rate, a compensatory (error-correcting) heart rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachometer differenced with a forcing function from a quasi-random input generator; the resulting pulse-rate error is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-loop properties are derived, such as crossover frequency, stability margins, and closed-loop bandwidth. These are evaluated for a range of forcing functions and for step as well as random disturbances.
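
    For reference, McRuer's crossover law states that near the crossover frequency the combined operator-plant open-loop describing function behaves as

    $$ Y_p(j\omega)\, Y_c(j\omega) \approx \frac{\omega_c}{j\omega}\, e^{-j\omega \tau_e}, $$

    with crossover frequency ω_c and effective time delay τ_e; the study asks whether the heart-rate loop exhibits this same form.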

  10. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
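
    The adaptation law described (error, signed squared error, and integral error driving the controller gain) can be sketched on a toy first-order growth model; every parameter value below is invented for illustration and none is taken from the paper.

    ```python
    # Toy model-reference adaptation: gain K is tuned online so the specific
    # growth rate mu tracks its set point; gammas weight the three error terms.
    def simulate(gamma1=0.5, gamma2=0.2, gamma3=0.1,
                 mu_set=0.1, steps=5000, dt=0.1):
        K, mu, e_int = 0.5, 0.02, 0.0
        for _ in range(steps):
            e = mu_set - mu
            e_int += e * dt
            # e*abs(e) is a sign-preserving "squared error" term
            K += (gamma1 * e + gamma2 * e * abs(e) + gamma3 * e_int) * dt
            mu += (K * mu_set - mu) * dt   # first-order plant response to K
        return mu

    print(simulate())   # approaches the 0.1 set point
    ```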

  11. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  12. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l), designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even at very high bit-error rates, say 10^-1 to 10^-2.
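
    For orientation, the basic quantity behind such an analysis is easy to state: a block code of length n correcting up to t bit errors on a binary symmetric channel with crossover probability p decodes correctly with probability given by the generic binomial sum below (our illustration, not the report's exact derivation).

    ```python
    from math import comb

    # P(correct decoding): at most t of the n bits are flipped by the channel.
    def p_correct(n, t, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

    for p in (1e-1, 1e-2):
        print(p, 1 - p_correct(63, 2, p))   # block error probability
    ```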

  13. Peculiarities of dislocation motion in aluminum with allowance for the Peierls relief in the presence of ultrasound

    NASA Astrophysics Data System (ADS)

    Arakelyan, M. M.

    2017-11-01

    The effect of ultrasound on the motion of Frenkel-Kontorova dislocations in aluminum has been studied with inclusion of the Peierls relief. A dislocation moves at a variable rate when overcoming the Peierls barrier. The dislocation mean free path changes under the action of ultrasound at various frequencies comparable to the dislocation transition time to a neighboring valley. The stress-strain dependences have been obtained for high and low strain rates. In both cases disordering takes place; however, the disordering rates and characters differ. At the resonance frequency, the strain resistance decreases, the hardening stage is shortened, and the disordering stage is elongated. The dependence of the hardening coefficient on the coordinate has three segments of different character.

  14. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    PubMed

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions regarding the direction of change for error-type proportions. The current findings argue for an alternative concept of the role of activation and decay in influencing types of serial-order sound errors. Rather than a slow activation decay rate (Dell, 1986), the results of the current study were more compatible with an alternative explanation of rapid activation decay or slow build-up of residual activation.

  15. Weight control in schizophrenic patients through Sakata's Charting of Daily Weight Pattern and its associations with temperament and character.

    PubMed

    Miyoshi, Ryoei; Matsuo, Hisae; Naono-Nagatomo, Keiko; Ozono, Kazuhiko; Araki, Ryuji; Ishikawa, Michiko; Abe, Hiroshi; Taniguchi, Hiroshi; Ishida, Yasushi

    2014-02-01

    This study examined whether daily self-monitoring of weight and monthly interviews with a doctor improved eating habits and led to weight loss, and whether temperament and character traits affect weight change in persons with schizophrenia. Participants used Sakata's Charting of Daily Weight Pattern to monitor their weight daily. In addition, Sakata's Eating Behavior Questionnaire was administered to evaluate eating-behavior awareness. The Temperament and Character Inventory (TCI) was used to assess participants' temperament and character. Fifty patients were divided into two groups: the intervention group (n = 25) filled in Sakata's Charting of Daily Weight Pattern every day, was interviewed monthly by a doctor about weight management, and was weighed monthly. The non-intervention group (n = 25) was only weighed monthly. The body mass index (BMI) of the intervention group decreased significantly (mean ± standard error: 0.59 ± 0.10 kg/m², p < 0.001), while their scores on Sakata's Eating Behavior Questionnaire improved significantly, albeit marginally. Conversely, BMI increased significantly (0.66 ± 0.18 kg/m², p < 0.001) in the non-intervention group, whose scores on Sakata's Eating Behavior Questionnaire did not change significantly. Weight change and TCI scores were not correlated for the intervention group, but scores for "self-directedness" and weight gain in the non-intervention group had a marginally significant negative correlation (r = -0.33, p < 0.10). Our results suggest that monitoring one's weight daily on Sakata's Charting of Daily Weight Pattern led to improvements in eating behavior and a decrease in the BMI of patients with schizophrenia. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
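
    The partition rests on assuming the radar estimation error and the gauge area-point sampling error are uncorrelated, so the variance of the radar-gauge difference splits additively (our notation):

    $$ \operatorname{Var}(R - G) = \sigma_r^2 + \sigma_{ap}^2, $$

    where σ_r² is the radar rain-estimation error variance and σ_ap² is the area-point error variance; estimating σ_ap² independently (e.g., from inter-gauge correlations) leaves σ_r² by subtraction.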

  17. Integrating Visual Mnemonics and Input Feedback With Passphrases to Improve the Usability and Security of Digital Authentication.

    PubMed

    Juang, Kevin; Greenstein, Joel

    2018-04-01

    We developed a new authentication system based on passphrases instead of passwords. Our new system incorporates a user-generated mnemonic picture displayed during login, definition tooltips, error correction to reduce typographical errors, a decoy-based input masking technique, and random passphrase generation using either a specialized wordlist or a sentence template. Passphrases exhibit a greater level of security than traditional passwords, but their wider adoption has been hindered by human factors issues. Our assertion is that the added features of our system work particularly well with passphrases and help address these shortcomings. We conducted a study to evaluate our new system with a customized 1,450-word list and our new system with a 6-word sentence structure against the control conditions of a user-created passphrase of at least 24 characters and a system-generated passphrase using a 10,326-word list. Fifty participants completed two sessions so that we could measure the usability and security of the authentication schemes. With the new system conditions, memorability was improved, and security was equivalent to or better than the control conditions. Usability and overall ratings also favored the new system conditions over the control conditions. Our research presents a new authentication system using innovative techniques that improve on the usability and security of existing password and passphrase authentication systems. In computer security, drastic changes should never happen overnight, but we recommend that our contributions be incorporated into current authentication systems to help facilitate a transition from passwords to usable passphrases.
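
    The two generation schemes compared above can be sketched as follows; the tiny word lists and the six-slot template are placeholders for the study's 1,450-word list and sentence structure, which are not reproduced in the abstract.

    ```python
    import secrets

    WORDS = ["correct", "horse", "battery", "staple", "purple", "otter"]
    TEMPLATE = {  # toy six-slot sentence template (assumed structure)
        "adjective": ["quick", "quiet"], "noun": ["fox", "river"],
        "verb": ["jumps", "flows"], "adverb": ["swiftly", "gently"],
        "preposition": ["over", "past"], "object": ["fences", "stones"],
    }

    def wordlist_passphrase(n_words=4):
        # secrets (not random) for cryptographically secure choices
        return " ".join(secrets.choice(WORDS) for _ in range(n_words))

    def sentence_passphrase():
        order = ["adjective", "noun", "verb", "adverb", "preposition", "object"]
        return " ".join(secrets.choice(TEMPLATE[slot]) for slot in order)

    print(wordlist_passphrase())
    print(sentence_passphrase())
    ```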

  18. Rationale and design of a randomized trial to evaluate an evidence-based prescription drug label on actual medication use.

    PubMed

    Shrank, William H; Parker, Ruth; Davis, Terry; Pandit, Anjali U; Knox, Joann P; Moraras, Pear; Rademaker, Alfred; Wolf, Michael S

    2010-11-01

    Medication errors are an important public health concern, and poor understanding of medication labels are a root cause. Research shows that labels are variable, of poor quality, and not patient-centered. No real-world trials have evaluated whether improved medication labels can affect appropriate medication use, adherence or health outcomes. We developed an evidence-based prescription label that addresses both content and format. The enhanced label includes a universal medication schedule (UMS) that standardizes the directions for use incorporating 1) standard time periods for administration (morning, noon, evening, and bedtime), 2) numeric vs. alpha characters, 3) 'carriage returns' to separate daily dose and 4) a graphic aid to visually depict dose and frequency. We will evaluate the effect of providing this label to randomly sampled patients who receive their care from free clinics, mobile vans and federally qualified health centers (FQHCs) in Northern Virginia. We will recruit patients with diabetes or hypertension; these patients will be randomly assigned to receive all of their medications with improved labels or to receive prescriptions with standard labels. The primary outcome will be the patient's ability to correctly demonstrate dosing instructions. Other outcomes include adherence, error rates and health outcomes. To our knowledge, this trial is the first to evaluate the effect of prescription label improvement on understanding, medication use and outcomes in a clinical setting. If successful, these findings could be implemented broadly to promote safe and appropriate medication use and to support evidence-based standards in the development of labels. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  20. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  1. Study of Pattern of Change in Handwriting Class Characters with Different Grades of Myopia.

    PubMed

    Hedge, Shruti Prabhat; Dayanidhi, Vijay Kautilya; Sriram

    2015-12-01

    Handwriting is a visuo-motor skill highly dependent on visual skills. Any defect in the visual inputs could produce a change in the handwriting. Understanding the variation in handwriting characters caused by changes in visual acuity can help in identifying learning disabilities in children and also in assessing disability in the elderly. In our study we try to analyse and catalogue these changes in the handwriting of a person. The study was conducted among 100 subjects having normal visual acuity. They were asked to perform a set of writing tasks, which were then repeated after inducing different grades of myopia. Changes in the handwriting class characters were analysed and compared across all grades of myopia. It was found that letter size, pastiosity, word omissions, and inability to stay on the line all increase with changes in visual acuity. However, these findings are not proportional to the grade of myopia. From the findings of the study it can be concluded that myopia significantly influences handwriting and any change in visual acuity induces corresponding changes in handwriting. There is an increase in letter size and pastiosity, whereas the ability to stay on the line and the space between lines decrease with different grades of myopia. The changes are not linear and cannot be used to predict the grade of myopia, but they can serve as parameters suggestive of refractive error.

  2. Portrayals of schizophrenia by entertainment media: a content analysis of contemporary movies.

    PubMed

    Owen, Patricia R

    2012-07-01

    Critics of entertainment media have indicated that cinematic depictions of schizophrenia are stereotypic and characterized by misinformation about symptoms, causes, and treatment. The pervasiveness and nature of misinformation are difficult to ascertain because of the lack of empirically based studies of movies portraying schizophrenia. This study analyzed portrayals of schizophrenia in contemporary movies to ascertain prevalence of stereotypes and misinformation about schizophrenia. English-language movies featuring at least one main character with schizophrenia that were released for showing in theaters between 1990 and 2010 were analyzed for depictions of schizophrenia. Two researchers independently rated each character with a checklist that assessed demographic characteristics, symptoms and stereotypes, causation, and treatment. Forty-two characters from 41 movies were identified, a majority of whom were male and Caucasian. Most characters displayed positive symptoms of schizophrenia. Delusions were featured most frequently, followed by auditory and visual hallucinations. A majority of characters displayed violent behavior toward themselves or others, and nearly one-third of violent characters engaged in homicidal behavior. About one-fourth of characters committed suicide. Causation of schizophrenia was infrequently noted, although about one-fourth of movies implied that a traumatic life event was significant in causation. Of movies alluding to or showing treatment, psychotropic medications were most commonly portrayed. The finding that misinformation and negative portrayals of schizophrenia in contemporary movies are common underscores the importance of determining how viewers interpret media messages and how these interpretations inform attitudes and beliefs both of the general public and of people with schizophrenia.

  3. Single-locus species delimitation: a test of the mixed Yule-coalescent model, with an empirical application to Philippine round-leaf bats.

    PubMed

    Esselstyn, Jacob A; Evans, Ben J; Sedlock, Jodi L; Anwarali Khan, Faisal Ali; Heaney, Lawrence R

    2012-09-22

    Prospects for a comprehensive inventory of global biodiversity would be greatly improved by automating methods of species delimitation. The general mixed Yule-coalescent (GMYC) was recently proposed as a potential means of increasing the rate of biodiversity exploration. We tested this method with simulated data and applied it to a group of poorly known bats (Hipposideros) from the Philippines. We then used echolocation call characteristics to evaluate the plausibility of species boundaries suggested by GMYC. In our simulations, GMYC performed relatively well (errors in estimated species diversity less than 25%) when the product of the haploid effective population size (Nₑ) and speciation rate (SR; per lineage per million years) was less than or equal to 10⁵, while interspecific variation in Nₑ was twofold or less. However, at higher but still biologically relevant values of Nₑ × SR, and when Nₑ varied tenfold among species, performance was very poor. GMYC analyses of mitochondrial DNA sequences from Philippine Hipposideros suggest actual diversity may be approximately twice the current estimate, and available echolocation call data are mostly consistent with GMYC delimitations. In conclusion, we consider the GMYC model useful under some conditions, but additional information on Nₑ, SR and/or corroboration from independent character data are needed to allow meaningful interpretation of results.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, D; McCarthy, A; Galavis, P

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The PlanCheck script is currently capable of checking for the following numbers of potential failure modes per category: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes: initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
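
    The PlanCheck plugin itself is a C# ESAPI script; as a language-agnostic illustration of the pattern the abstract implies (a registry of per-category checks run before approval), here is a minimal Python sketch. All names, the example rule, and the MU threshold are hypothetical.

        CHECKS = {"Images": [], "Contours": [], "Plans": [], "Beams": [], "Dose": []}

        def check(category):
            """Register a check function under one of the five categories."""
            def register(fn):
                CHECKS[category].append(fn)
                return fn
            return register

        @check("Beams")
        def edw_minimum_mu(plan):
            # Deliverability rule: enhanced dynamic wedge fields need a minimum MU.
            return [f"Beam {b['id']}: EDW field below minimum MU"
                    for b in plan["beams"]
                    if b.get("wedge") == "EDW" and b["mu"] < 20]

        def run_all_checks(plan):
            return [msg for fns in CHECKS.values() for fn in fns for msg in fn(plan)]

        plan = {"beams": [{"id": "G180", "wedge": "EDW", "mu": 12}]}
        print(run_all_checks(plan))  # ['Beam G180: EDW field below minimum MU']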

  5. Bit-error rate for free-space adaptive optics laser communications.

    PubMed

    Tyson, Robert K

    2002-04-01

    An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
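
    A common way to set up such an analysis (an assumption here; the paper's exact formulation may differ) is to average the instantaneous on-off-keying bit-error rate over the irradiance fluctuations that survive adaptive-optics compensation, with p(I) the residual irradiance PDF (e.g., lognormal in weak turbulence):

        \[
          \mathrm{BER} = \int_0^{\infty} p(I)\,\frac{1}{2}\,
            \operatorname{erfc}\!\left(\frac{I\,\langle \mathrm{SNR} \rangle}{2\sqrt{2}}\right) dI
        \]

    Compensation narrows p(I) around its mean, which is what produces the orders-of-magnitude BER improvement described above.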

  6. Transcriptional fidelities of human mitochondrial POLRMT, yeast mitochondrial Rpo41, and phage T7 single-subunit RNA polymerases.

    PubMed

    Sultana, Shemaila; Solotchi, Mihai; Ramachandran, Aparna; Patel, Smita S

    2017-11-03

    Single-subunit RNA polymerases (RNAPs) are present in phage T7 and in mitochondria of all eukaryotes. This RNAP class plays important roles in biotechnology and cellular energy production, but we know little about its fidelity and error rates. Herein, we report the error rates of three single-subunit RNAPs measured from the catalytic efficiencies of correct and all possible incorrect nucleotides. The average error rates of T7 RNAP (2 × 10⁻⁶), yeast mitochondrial Rpo41 (6 × 10⁻⁶), and human mitochondrial POLRMT (RNA polymerase mitochondrial) (2 × 10⁻⁵) indicate high accuracy/fidelity of RNA synthesis resembling those of replicative DNA polymerases. All three RNAPs exhibit a distinctly high propensity for GTP misincorporation opposite dT, predicting frequent A→G errors in RNA with rates of ∼10⁻⁴. The A→C, G→A, A→U, C→U, G→U, U→C, and U→G errors, mostly due to pyrimidine-purine mismatches, were relatively frequent (10⁻⁵-10⁻⁶), whereas C→G, U→A, G→C, and C→A errors from purine-purine and pyrimidine-pyrimidine mismatches were rare (10⁻⁷-10⁻¹⁰). POLRMT also shows a high C→A error rate on 8-oxo-dG templates (∼10⁻⁴). Strikingly, POLRMT shows a high mutagenic bypass rate, which is exacerbated by TEFM (transcription elongation factor mitochondrial). The lifetime of POLRMT on a terminally mismatched elongation substrate is increased in the presence of TEFM, which allows POLRMT to efficiently bypass the error and continue with transcription. This investigation of nucleotide selectivity on normal and oxidatively damaged DNA by three single-subunit RNAPs provides the basic information to understand the error rates in mitochondria and, in the case of T7 RNAP, to assess the quality of in vitro transcribed RNAs. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
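
    The abstract states that error rates were measured from catalytic efficiencies; the standard kinetic definition of the misincorporation frequency for a given template position (the notation here is assumed) is

        \[
          f_{\mathrm{error}} =
            \frac{(k_{\mathrm{cat}}/K_m)_{\mathrm{incorrect}}}
                 {(k_{\mathrm{cat}}/K_m)_{\mathrm{correct}} + (k_{\mathrm{cat}}/K_m)_{\mathrm{incorrect}}}
            \;\approx\;
            \frac{(k_{\mathrm{cat}}/K_m)_{\mathrm{incorrect}}}
                 {(k_{\mathrm{cat}}/K_m)_{\mathrm{correct}}}
        \]

    On this reading, an A→G error rate of ∼10⁻⁴ means GTP is incorporated opposite dT with roughly 10⁴-fold lower catalytic efficiency than the correct ATP.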

  7. Presence in Mediterranean hotspots and floral symmetry affect speciation and extinction rates in Proteaceae.

    PubMed

    Reyes, Elisabeth; Morlon, Hélène; Sauquet, Hervé

    2015-07-01

    The Proteaceae is a large angiosperm family displaying the common pattern of uneven distribution of species among genera. Previous studies have shown that this disparity is a result of variation in diversification rates across lineages, but the reasons for this variation are still unclear. Here, we tested the impact of floral symmetry and occurrence in Mediterranean climate regions on speciation and extinction rates in the Proteaceae. A rate shift analysis was conducted on dated genus-level phylogenetic trees of the Proteaceae. Character-dependent analyses were used to test for differences in diversification rates between actinomorphic and zygomorphic lineages and between lineages located within or outside Mediterranean climate regions. The rate shift analysis identified 5-10 major diversification rate shifts in the Proteaceae tree. The character-dependent analyses showed that speciation rates, extinction rates and net diversification rates of the Proteaceae were significantly higher for lineages occurring in Mediterranean hotspots. Higher speciation and extinction rates were also detected for zygomorphic species, but net diversification rates appeared to be similar in actinomorphic and zygomorphic Proteaceae. Presence in Mediterranean hotspots favors Proteaceae diversification. In contrast with observations at the scale of angiosperms, floral symmetry is not a trait that strongly influences their evolutionary success. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  8. Fruit evolution and diversification in campanulid angiosperms.

    PubMed

    Beaulieu, Jeremy M; Donoghue, Michael J

    2013-11-01

    With increases in both the size and scope of phylogenetic trees, we are afforded a renewed opportunity to address long-standing comparative questions, such as whether particular fruit characters account for much of the variation in diversity among flowering plant clades. Studies to date have reported conflicting results, largely as a consequence of taxonomic scale and a reliance on potentially conservative statistical measures. Here we examine a larger and older angiosperm clade, the Campanulidae, and infer the rates of character transitions among the major fruit types, emphasizing the evolution of the achene fruits that are most frequently observed within the group. Our analyses imply that campanulids likely originated bearing capsules, and that all subsequent fruit diversity was derived from various modifications of this dry fruit type. We also found that the preponderance of lineages bearing achenes is a consequence of not only being a fruit type that is somewhat irreversible once it evolves, but one that also seems to have a positive association with diversification rates. Although these results imply the achene fruit type is a significant correlate of diversity patterns observed across campanulids, we conclude that it remains difficult to confidently and directly view this character state as the actual cause of increased diversification rates. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  9. A comparison of medication administration errors from original medication packaging and multi-compartment compliance aids in care homes: A prospective observational study.

    PubMed

    Gilmartin-Thomas, Julia Fiona-Maree; Smith, Felicity; Wolfe, Rory; Jani, Yogini

    2017-07-01

    No published study has been specifically designed to compare medication administration errors between original medication packaging and multi-compartment compliance aids in care homes, using direct observation. Compare the effect of original medication packaging and multi-compartment compliance aids on medication administration accuracy. Prospective observational. Ten Greater London care homes. Nurses and carers administering medications. Between October 2014 and June 2015, a pharmacist researcher directly observed solid, orally administered medications in tablet or capsule form at ten purposively sampled care homes (five only used original medication packaging and five used both multi-compartment compliance aids and original medication packaging). The medication administration error rate was calculated as the number of observed doses administered (or omitted) in error according to medication administration records, compared to the opportunities for error (total number of observed doses plus omitted doses). Over 108.4h, 41 different staff (35 nurses, 6 carers) were observed to administer medications to 823 residents during 90 medication administration rounds. A total of 2452 medication doses were observed (1385 from original medication packaging, 1067 from multi-compartment compliance aids). One hundred and seventy eight medication administration errors were identified from 2493 opportunities for error (7.1% overall medication administration error rate). A greater medication administration error rate was seen for original medication packaging than multi-compartment compliance aids (9.3% and 3.1% respectively, risk ratio (RR)=3.9, 95% confidence interval (CI) 2.4 to 6.1, p<0.001). Similar differences existed when comparing medication administration error rates between original medication packaging (from original medication packaging-only care homes) and multi-compartment compliance aids (RR=2.3, 95%CI 1.1 to 4.9, p=0.03), and between original medication packaging and multi-compartment compliance aids within care homes that used a combination of both medication administration systems (RR=4.3, 95%CI 2.7 to 6.8, p<0.001). A significant difference in error rate was not observed between use of a single or combination medication administration system (p=0.44). The significant difference in, and high overall, medication administration error rate between original medication packaging and multi-compartment compliance aids supports the use of the latter in care homes, as well as local investigation of tablet and capsule impact on medication administration errors and staff training to prevent errors occurring. As a significant difference in error rate was not observed between use of a single or combination medication administration system, common practice of using both multi-compartment compliance aids (for most medications) and original packaging (for medications with stability issues) is supported. Copyright © 2017 Elsevier Ltd. All rights reserved.
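
    A sketch of the error-rate and risk-ratio arithmetic defined above. The per-system dose counts below are illustrative back-calculations from the reported percentages, and the paper's RR of 3.9 presumably comes from an analysis that also handles clustering of observations, so this simple calculation is not expected to reproduce it exactly.

        from math import exp, log, sqrt

        def error_rate(errors, opportunities):
            return errors / opportunities

        def risk_ratio(e1, n1, e2, n2, z=1.96):
            """RR of group 1 vs group 2 with a 95% CI computed on the log scale."""
            rr = (e1 / n1) / (e2 / n2)
            se = sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)  # SE of log(RR)
            return rr, (exp(log(rr) - z * se), exp(log(rr) + z * se))

        # 178 errors out of 2493 opportunities -> 7.1% overall, as reported.
        print(f"overall rate: {error_rate(178, 2493):.1%}")
        # Illustrative counts: ~9.3% of 1385 original-packaging doses vs
        # ~3.1% of 1067 compliance-aid doses.
        print(risk_ratio(129, 1385, 33, 1067))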

  10. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

    This paper reveals that nearly 25 years after BEIR I used Russell's dose-rate data to support the adoption of the linear no-threshold (LNT) dose-response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  11. Stereotype threat prevents perceptual learning

    PubMed Central

    Shiffrin, Richard M.; Boucher, Kathryn L.; Van Loo, Katie; Rydell, Michael T.

    2010-01-01

    Stereotype threat (ST) refers to a situation in which a member of a group fears that her or his performance will validate an existing negative performance stereotype, causing a decrease in performance. For example, reminding women of the stereotype “women are bad at math” causes them to perform more poorly on math questions from the SAT and GRE. Performance deficits can be of several types and be produced by several mechanisms. We show that ST prevents perceptual learning, defined in our task as an increasing rate of search for a target Chinese character in a display of such characters. Displays contained two or four characters and half of these contained a target. Search rate increased across a session of training for a control group of women, but not women under ST. Speeding of search is typically explained in terms of learned “popout” (automatic attraction of attention to a target). Did women under ST learn popout but fail to express it? Following training, the women were shown two colored squares and asked to choose the one with the greater color saturation. Superimposed on the squares were task-irrelevant Chinese characters. For women not trained under ST, the presence of a trained target on one square slowed responding, indicating that training had caused the learning of an attention response to targets. Women trained under ST showed no slowing, indicating that they had not learned such an attention response. PMID:20660737

  12. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    PubMed

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  13. Teamwork and clinical error reporting among nurses in Korean hospitals.

    PubMed

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.

  14. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of the S₁ and S₂ indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…

  15. Can a two-hour lecture by a pharmacist improve the quality of prescriptions in a pediatric hospital? A retrospective cohort study.

    PubMed

    Vairy, Stephanie; Corny, Jennifer; Jamoulle, Olivier; Levy, Arielle; Lebel, Denis; Carceller, Ana

    2017-12-01

    A high rate of prescription errors exists in pediatric teaching hospitals, especially during initial training. To determine the effectiveness of a two-hour lecture by a pharmacist on rates of prescription errors and quality of prescriptions. A two-hour lecture led by a pharmacist was provided to 11 junior pediatric residents (PGY-1) as part of a one-month immersion program. A control group included 15 residents without the intervention. We reviewed charts to analyze the first 50 prescriptions of each resident. Data were collected from 1300 prescriptions involving 451 patients, 550 in the intervention group and 750 in the control group. The rate of prescription errors in the intervention group was 9.6% compared to 11.3% in the control group (p=0.32), affecting 106 patients. Statistically significant differences between both groups were prescriptions with unwritten doses (p=0.01) and errors involving overdosing (p=0.04). We identified many errors as well as issues surrounding quality of prescriptions. We found a 10.6% prescription error rate. This two-hour lecture seems insufficient to reduce prescription errors among junior pediatric residents. This study highlights the most frequent types of errors and prescription quality issues that should be targeted by future educational interventions.

  16. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
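
    A sketch of the errors-per-1,000-occupied-bed-days calculation with a normal-approximation Poisson confidence interval; the paper does not state its CI method, so the Poisson assumption and the back-calculated baseline counts are ours (about 190 errors over roughly 213 OBDs reproduces the reported 892 [765-1,019]).

        from math import sqrt

        def rate_per_1000_obd(errors, obds, z=1.96):
            rate = 1000 * errors / obds
            half_width = z * 1000 * sqrt(errors) / obds  # normal approx. to Poisson
            return rate, (rate - half_width, rate + half_width)

        print(rate_per_1000_obd(190, 213))  # ≈ (892.0, (765.2, 1018.9))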

  17. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  18. Message framing and color combination in the perception of medical information.

    PubMed

    Chien, Yu-Hung

    2011-04-01

    A 2 x 2 between-subjects design was used to examine the effects of message framing (gain vs loss) and color combination (red background with white characters vs white background with black characters) on 120 university students' perception of materials promoting the H1N1 flu vaccine and their willingness to receive the vaccine after they had read the materials. Each participant completed a 6-item questionnaire, and the results of an analysis of variance showed that participants rated vaccine information presented through loss-framed messages as having greater interest and leading to greater understanding. Loss-framed messages presented on a white background with black characters significantly increased the willingness of the participants to receive the vaccine.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkelbach, Timothy C., E-mail: tcb2112@columbia.edu; Reichman, David R., E-mail: drr2103@columbia.edu; Hybertsen, Mark S., E-mail: mhyberts@bnl.gov

    We extend our previous work on singlet exciton fission in isolated dimers to the case of crystalline materials, focusing on pentacene as a canonical and concrete example. We discuss the proper interpretation of the character of low-lying excited states of relevance to singlet fission. In particular, we consider a variety of metrics for measuring charge-transfer character, conclusively demonstrating significant charge-transfer character in the low-lying excited states. The impact of this electronic structure on the subsequent singlet fission dynamics is assessed by performing real-time master-equation calculations involving hundreds of quantum states. We make direct comparisons with experimental absorption spectra and singlet fission rates, finding good quantitative agreement in both cases, and we discuss the mechanistic distinctions that exist between small isolated aggregates and bulk systems.

  20. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
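
    For reference, the effective-LET approximation whose breakdown the abstract highlights takes the standard form below: a particle striking the sensitive volume at angle θ from normal incidence traverses a longer chord, so its deposited charge is scaled as if the LET were

        \[
          \mathrm{LET}_{\mathrm{eff}} = \frac{\mathrm{LET}}{\cos\theta}
        \]

    In modern-scale devices the sensitive volumes are thin and closely packed, so a single angular strike can traverse several cells at once (multiple-cell upsets), and error rates extrapolated from normal-incidence test data via this formula can be skewed.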

  1. Reducing the Familiarity of Conjunction Lures with Pictures

    ERIC Educational Resources Information Center

    Lloyd, Marianne E.

    2013-01-01

    Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…

  2. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  3. Certification of ICI 1012 optical data storage tape

    NASA Technical Reports Server (NTRS)

    Howell, J. M.

    1993-01-01

    ICI has developed a unique and novel method of certifying a Terabyte optical tape. Tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors, called the Corrected Byte Error Rate or CBER. This probabilistic method was developed because the error rate cannot be measured directly, for two reasons. First, written data are indelible, so one cannot employ the write/read tests used for magnetic tape. Second, the anticipated error rates would require impractically large samples to measure accurately; for example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data are retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
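
    The abstract's statistical upper limit can be illustrated with a standard one-sided exact binomial bound (ICI's actual certification statistics are not detailed here, so this is a generic sketch): if a sample of n bytes is read back with zero uncorrectable errors, the upper confidence bound on the per-byte error probability is as below, which also shows why a rate near 1E-12 cannot be measured directly per tape.

        def upper_bound_zero_errors(n_bytes, confidence=0.95):
            """Exact one-sided upper bound on the error probability given zero
            observed errors in n_bytes trials ("rule of three": roughly 3/n)."""
            alpha = 1 - confidence
            return 1 - alpha ** (1 / n_bytes)

        # Bounding a rate near 1E-12 at 95% confidence requires reading back
        # on the order of 3e12 error-free bytes -- far more than one tape holds.
        print(upper_bound_zero_errors(3e12))  # ~1e-12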

  4. Microgravity Foam Structure and Rheology

    NASA Technical Reports Server (NTRS)

    Durian, Douglas J.

    1997-01-01

    To exploit rheological and multiple-light scattering techniques, and ultimately microgravity conditions, in order to quantify and elucidate the unusual elastic character of foams in terms of their underlying microscopic structure and dynamics. Special interest is in determining how this elastic character vanishes, i.e. how the foam melts into a simple viscous liquid, as a function of both increasing liquid content and shear strain rate. The unusual elastic character of foams will be quantified macroscopically by measurement of the shear stress as a function of static shear strain, shear strain rate, and time following a step strain; such data will be analyzed in terms of a yield stress, a static shear modulus, and dynamical time scales. Microscopic information about bubble packing and rearrangement dynamics, from which these macroscopic non-Newtonian properties presumably arise, will be obtained non-invasively by novel multiple-light scattering diagnostics such as Diffusing-Wave Spectroscopy (DWS). Quantitative trends with materials parameters, such as average bubble size, and liquid content, will be sought in order to elucidate the fundamental connection between the microscopic structure and dynamics and the macroscopic rheology.

  5. Cross-cultural differences in children's choices, categorizations, and evaluations of truths and lies.

    PubMed

    Fu, Genyue; Xu, Fen; Cameron, Catherine Ann; Leyman, Gail; Lee, Kang

    2007-03-01

    This study examined cross-cultural differences and similarities in children's moral understanding of individual- or collective-oriented lies and truths. Seven-, 9-, and 11-year-old Canadian and Chinese children were read stories about story characters facing moral dilemmas about whether to lie or tell the truth to help a group but harm an individual or vice versa. Participants chose to lie or to tell the truth as if they were the character (Experiments 1 and 2) and categorized and evaluated the story characters' truthful and untruthful statements (Experiments 3 and 4). Most children in both cultures labeled lies as lies and truths as truths. The major cultural differences lay in choices and moral evaluations. Chinese children chose lying to help a collective but harm an individual, and they rated it less negatively than lying with opposite consequences. Chinese children rated truth telling to help an individual but harm a group less positively than the alternative. Canadian children did the opposite. These findings suggest that cross-cultural differences in emphasis on groups versus individuals affect children's choices and moral judgments about truth and deception.

  6. Hybrid neuro-fuzzy approach for automatic vehicle license plate recognition

    NASA Astrophysics Data System (ADS)

    Lee, Hsi-Chieh; Jong, Chung-Shi

    1998-03-01

    Most currently available vehicle identification systems use techniques such as R.F., microwave, or infrared to help identify the vehicle. Transponders are usually installed in the vehicle in order to transmit the corresponding information to the sensory system. Installing a transponder in each vehicle is considered expensive, and a malfunctioning transponder causes the vehicle identification system to fail. In this study, a novel hybrid approach is proposed for automatic vehicle license plate recognition. A system prototype was built which can be used independently or in cooperation with a current vehicle identification system to identify a vehicle. The prototype consists of four major modules: license plate region identification, character extraction from the license plate, character recognition, and the SimNet neuro-fuzzy system. To test the performance of the proposed system, three hundred and eighty vehicle image samples were taken with a digital camera. The license plate recognition success rate of the prototype is approximately 91%, while the character recognition success rate is approximately 97%.

  7. Homoplastic microinversions and the avian tree of life

    PubMed Central

    2011-01-01

    Background: Microinversions are cytologically undetectable inversions of DNA sequences that accumulate slowly in genomes. Like many other rare genomic changes (RGCs), microinversions are thought to be virtually homoplasy-free evolutionary characters, suggesting that they may be very useful for difficult phylogenetic problems such as the avian tree of life. However, few detailed surveys of these genomic rearrangements have been conducted, making it difficult to assess this hypothesis or understand the impact of microinversions upon genome evolution. Results: We surveyed non-coding sequence data from a recent avian phylogenetic study and found substantially more microinversions than expected based upon prior information about vertebrate inversion rates, although this is likely due to underestimation of these rates in previous studies. Most microinversions were lineage-specific or united well-accepted groups. However, some homoplastic microinversions were evident among the informative characters. Hemiplasy, which reflects differences between gene trees and the species tree, did not explain the observed homoplasy. Two specific loci were microinversion hotspots, with high numbers of inversions that included both the homoplastic as well as some overlapping microinversions. Neither stem-loop structures nor detectable sequence motifs were associated with microinversions in the hotspots. Conclusions: Microinversions can provide valuable phylogenetic information, although power analysis indicates that large amounts of sequence data will be necessary to identify enough inversions (and similar RGCs) to resolve short branches in the tree of life. Moreover, microinversions are not perfect characters and should be interpreted with caution, just as with any other character type. Independent of their use for phylogenetic analyses, microinversions are important because they have the potential to complicate alignment of non-coding sequences. Despite their low rate of accumulation, they have clearly contributed to genome evolution, suggesting that active identification of microinversions will prove useful in future phylogenomic studies. PMID:21612607

  8. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle, longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate distribution error and the elevator position command is developed (26) as a function of the energy distribution error. For any vertical flight path and speed mode the outerloop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
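
    The underlying total-energy relations can be written compactly (the notation and sign conventions here are ours, not the patent's). Total specific energy is E_s = h + V²/2g, so its rate normalized by speed is the sum of flight path angle and normalized longitudinal acceleration; thrust is commanded from the error in that sum, while the elevator is commanded from the error in the difference, which redistributes energy between path and speed:

        \[
          \frac{\dot{E}_s}{V} = \gamma + \frac{\dot{V}}{g}, \qquad
          \delta T \propto \left(\gamma_c + \frac{\dot{V}_c}{g}\right) - \left(\gamma + \frac{\dot{V}}{g}\right), \qquad
          \delta e \propto \left(\gamma_c - \frac{\dot{V}_c}{g}\right) - \left(\gamma - \frac{\dot{V}}{g}\right)
        \]

    The first error is the energy rate error driving thrust; the second is the energy rate distribution error driving the elevator, which is what decouples flight-path and speed control.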

  9. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.

  10. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  11. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
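
    A hand-written Python analogue of the detector-placement idea (PRESAGE itself is a compiler transformation; every name here is ours): addresses are generated incrementally so that a bit-flip in the running address keeps flowing through later iterations, and a single closed-form check at loop exit then exposes it.

        def strided_sum(data, base, stride, n, flip_at=None):
            addr, total = base, 0
            for i in range(n):
                if i == flip_at:
                    addr ^= 1 << 4        # simulated soft error in the address
                total += data[addr]
                addr += stride            # incremental address generation
            # Detector at the loop exit: any earlier flip has flowed into `addr`.
            assert addr == base + n * stride, "soft error detected at loop exit"
            return total

        data = list(range(4096))
        strided_sum(data, base=8, stride=16, n=64)              # clean run passes
        strided_sum(data, base=8, stride=16, n=64, flip_at=10)  # raises AssertionError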

  13. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    PubMed

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.

  14. A study of payload specialist station monitor size constraints. [space shuttle orbiters

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.

    1975-01-01

    Constraints on the CRT display size for the shuttle orbiter cabin are studied. The viewing requirements placed on these monitors were assumed to involve display of imaged scenes providing visual feedback during payload operations and display of alphanumeric characters. Data on target recognition/resolution, target recognition, and range rate detection by human observers were utilized to determine viewing requirements for imaged scenes. Field-of-view and acuity requirements for a variety of payload operations were obtained along with the necessary detection capability in terms of range-to-target size ratios. The monitor size necessary to meet the acuity requirements was established. An empirical test was conducted to determine required recognition sizes for displayed alphanumeric characters. The results of the test were used to determine the number of characters which could be simultaneously displayed based on the recognition size requirements using the proposed monitor size. A CRT display of 20 x 20 cm is recommended. A portion of the display area is used for displaying imaged scenes and the remaining display area is used for alphanumeric characters pertaining to the displayed scene. The entire display is used for the character alone mode.

  15. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists: instead of directly communicating quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from the use of online error-reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the most common subtype for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error-reporting system could also affect the reporting rate.

  16. Growth, smoltification, and smolt-to-adult return of spring chinook salmon from hatcheries on the Deschutes river, Oregon

    USGS Publications Warehouse

    Beckman, B.R.; Dickhoff, Walton W.; Zaugg, W.S.; Sharpe, C.; Hirtzel, S.; Schrock, R.; Larsen, D.A.; Ewing, R.D.; Palmisano, A.; Schreck, C.B.; Mahnken, C.V.W.

    1999-01-01

    The relationship between smoltification and smolt-to-adult return (SAR) of spring chinook salmon Oncorhynchus tshawytscha from the Deschutes River, Oregon, was examined for four release groups in each of three successive years. Fish were reared, marked with coded wire tags, and released from Round Butte Hatchery, Pelton Ladder rearing facility, and Warm Springs National Fish Hatchery. Smolt releases occurred in nearly the same place at similar times, allowing a direct comparison of SAR to several characters representing smolt quality. Return rates varied significantly among facilities, varying over an order of magnitude each year. The highest average SAR was from Pelton Ladder, the lowest was from Warm Springs. Each of the characters used as metrics of smoltification - fish size, spring growth rate (February-April), condition factor, plasma hormone concentration (thyroxine, cortisol, and insulin-like growth factor-I [IGF-I]), stress challenge, gill Na+,K+-ATPase activity, and liver glycogen concentration - varied significantly among facilities and seasonally within hatchery groups. However, only spring growth rate, gill ATPase activity, and plasma IGF-I concentration showed significant relationships to SAR. These characters and SAR itself were consistently lower for fish released from Warm Springs Hatchery than for fish from Round Butte Hatchery and Pelton Ladder. This demonstrates that differences in the quality of fish released by facilities may have profound effects on subsequent survival and suggests that manipulations of spring growth rate may be used to influence the quality of smolts released from facilities.

  17. The Junior Temperament and Character Inventory (JTCI): Psychometric properties of multi-informant ratings.

    PubMed

    Boson, Karin; Brändström, Sven; Sigvardsson, Sören

    2018-04-01

    The aims of the study were (a) to establish norms for the Swedish child self-report and caregiver rating versions of the Junior and Temperament Character Inventory (JTCI) among young adolescents, (b) to investigate its psychometric properties, and (c) to investigate congruence between children's self-reports and caregivers' ratings of a child's personality. The sample was a general population of 1,046 children ages 12-14 years and 654 caregivers. The JTCI was found to be reliable on all dimensions except Persistence in the child self-report version. Caregivers rated their own children's personalities as more mature than did the children themselves. Caregivers especially overestimated their daughters' self-reported capabilities for self-acceptance and self-efficacy and might have underestimated their daughters' need for emotional support. This highlights the importance of including the child's self-report on personality in both research and clinical assessments. The results also support the importance of age- and gender-separated norms. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. [Studies on the shade-endurance capacity of Glycyrrhiza uralensis].

    PubMed

    Wei, Sheng-li; Wang, Wen-quan; Chen, Xiu-hua; Qin, Shu-ying; Chen, Xiu-tian

    2005-01-01

    To study the shade-endurance property of Glycyrrhiza uralensis and provide a rationale for the practice of inter-cropping G. uralensis with trees. Black shading nets were used to provide five environments of different light intensities (light penetration rates of 100%, 75%, 65%, 50% and 25%, respectively). To assess the shade-endurance capacity of G. uralensis, several aspects were evaluated, including growth characters, physiological and ecological characters, biomass, and chemical contents. G. uralensis is a light-favored plant. Growth indices such as plant height, stem diameter, leaf number, root diameter, biomass, and daily average photosynthetic rate (Pn) are highest at a light penetration rate of 100%, and all of these indices decrease as light intensity decreases. However, G. uralensis possesses some degree of shade endurance; it adapts to the shading environment by increasing leaf area and chlorophyll contents. Shading has no obvious effect on the absolute light energy utilization rate (Eu) or the Fv/Fm ratio. The influence of shading on the chemical contents of G. uralensis is obvious.

  19. The relationship between temperament and character in conversion disorder and comorbid depression.

    PubMed

    Erten, Evrim; Yenilmez, Yelda; Fistikci, Nurhan; Saatcioglu, Omer

    2013-05-01

    The aim of this study was to compare conversion disorder patients with healthy controls in terms of temperament and character, and to determine the effect of these characteristics on comorbid depression, based on the idea that conversion disorder patients may have distinctive temperament and character qualities. The study involved 58 patients diagnosed with conversion disorder, based on the DSM-IV diagnostic criteria, under observation at the Bakırköy Psychiatric and Neurological Disorders Outpatient Center, Istanbul. The patients were interviewed with a Structured Clinical Interview (SCID-I) and 57 healthy volunteers, matched for age, sex and education level, were interviewed with a Structured Clinical Interview for people without a psychiatric disorder (SCID-I/NP). All the participants completed a sociodemographic form, the Hamilton Depression Rating Scale, the Hamilton Anxiety Scale and the Temperament and Character Inventory. The conversion disorder patients displayed more harm avoidance (P<.001), more impulsivity (P<.01) and more sentimentality (P<.01) than the healthy controls, but were less persistent (P<.05). In terms of character qualities, conversion disorder patients had high self-transcendence (P<.05), but were inadequate in terms of self-directedness (P<.001) and took on less responsibility (P<.05) than the healthy controls. Conversion disorder patients are significantly different from healthy controls on temperament and character measures of harm avoidance, persistence, self-transcendence and self-directedness.

  20. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10^-15 with a 10 second averaging time. Ranging and range rate as a function of the bit error rate of the communication link are reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10^-15 with a 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve performance in both operating modes.
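
    The modified Allan deviation quoted above can be computed directly from evenly spaced phase (ranging) samples. A minimal sketch in Python follows; the function and the sampling assumptions are ours, not the paper's (it assumes phase in seconds at a fixed interval tau0):

      import numpy as np

      def mod_adev(phase, tau0, m):
          # Modified Allan deviation at averaging time tau = m * tau0,
          # from phase samples x (seconds) taken every tau0 seconds.
          x = np.asarray(phase, dtype=float)
          N = len(x)
          if N < 3 * m + 1:
              raise ValueError("series too short for this averaging factor")
          d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]    # second differences at lag m
          s = np.convolve(d, np.ones(m), mode="valid")  # inner phase averages
          mvar = np.sum(s ** 2) / (2.0 * m ** 2 * (m * tau0) ** 2 * len(s))
          return np.sqrt(mvar)

    For scale, sigma_y = 9 x 10^-15 corresponds to a velocity uncertainty of order c * sigma_y, roughly 2.7 microns/s, the same order as the quoted two-way range-rate error.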

  1. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
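
    The upward bias that motivates modeling ɛ can be reproduced with a toy Monte Carlo; this is not the authors' maximum composite likelihood estimator, and every number below is invented:

      import numpy as np

      rng = np.random.default_rng(1)
      n, L, eps = 50, 100_000, 1e-3   # sample size, sites, per-base error rate
      # Truth: a completely monomorphic region, so every observed "SNP"
      # is a sequencing artifact.
      miscalls = rng.random((n, L)) < eps
      S = np.count_nonzero(miscalls.any(axis=0))   # spurious segregating sites
      a_n = np.sum(1.0 / np.arange(1, n))          # Watterson's constant
      print(f"spurious per-site Watterson theta: {S / a_n / L:.4f}")

    With these settings roughly n * L * eps = 5000 sites segregate although the true θ is zero, which is exactly the error-rate confounding that an estimator accounting for ɛ is designed to absorb.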

  2. High-speed real-time animated displays on the ADAGE (trademark) RDS 3000 raster graphics system

    NASA Technical Reports Server (NTRS)

    Kahlbaum, William M., Jr.; Ownbey, Katrina L.

    1989-01-01

    Techniques which may be used to increase the animation update rate of real-time computer raster graphic displays are discussed. They were developed on the ADAGE RDS 3000 graphics system in support of the Advanced Concepts Simulator at the NASA Langley Research Center. These techniques involve the use of a special-purpose parallel processor for high-speed character generation. The description of the parallel processor includes the Barrel Shifter, which is part of the hardware and the key to high-speed character rendition. The final result of this total effort was a fourfold increase in the update rate of an existing primary flight display, from 4 to 16 frames per second.
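
    The role of the barrel shifter can be illustrated in software: it lets a glyph row be aligned to an arbitrary pixel offset in a single shift instead of bit by bit. The word width and glyph format below are assumptions for illustration, not the ADAGE hardware:

      def blit_glyph_row(fb_word: int, glyph_row: int, x_offset: int) -> int:
          """OR an 8-bit, MSB-first glyph row into a 32-bit framebuffer word.

          The variable-distance shift is the operation a hardware barrel
          shifter performs in one cycle, which is what makes per-pixel
          character placement fast."""
          shifted = (glyph_row << 24) >> x_offset   # align glyph to pixel offset 0..24
          return (fb_word | shifted) & 0xFFFFFFFF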

  3. A long-term follow-up evaluation of electronic health record prescribing safety

    PubMed Central

    Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu

    2013-01-01

    Objective To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816
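
    The headline rate is a simple binomial proportion: 3.8 per 100 implies about 72 errors among the 1905 prescriptions. A Wilson score interval, sketched below, gives an interval of the same flavor; the paper does not state its CI method, and the reported 2.8 to 5.1 is slightly wider, so a different or cluster-adjusted method was presumably used:

      from math import sqrt

      def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
          # 95% Wilson score interval for a binomial proportion k/n.
          p = k / n
          denom = 1.0 + z * z / n
          center = (p + z * z / (2 * n)) / denom
          half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return center - half, center + half

      lo, hi = wilson_ci(72, 1905)
      print(f"{100 * 72 / 1905:.1f} per 100, 95% CI {100 * lo:.1f} to {100 * hi:.1f}")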

  4. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    PubMed

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.

  5. The effectiveness of risk management program on pediatric nurses' medication error.

    PubMed

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of the medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of hospital undesirable events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  6. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms use text databases as reference templates; because of the mismatch between them, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures. PMID:22164106
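
    As a concrete example of the bookkeeping such a framework rests on, the hit/miss/false-alarm counts that feed signal detection measures can be tallied by tolerance matching of detected line positions against reference positions. The matching rule below is our simplification, not the paper's five-measure procedure:

      def segmentation_tally(detected, reference, tol=5):
          """Greedily match detected text-line positions (e.g., separator
          y-coordinates in pixels) to reference ones within tol pixels.
          Returns (hits, misses, false_alarms)."""
          unmatched = sorted(reference)
          hits = 0
          for d in sorted(detected):
              match = next((r for r in unmatched if abs(r - d) <= tol), None)
              if match is not None:
                  unmatched.remove(match)
                  hits += 1
          return hits, len(reference) - hits, len(detected) - hits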

  7. Reconstruction of flaws from eddy current sensor data using a differential forward model

    NASA Astrophysics Data System (ADS)

    Trillon, Adrien

    Eddy current tomography can be employed to characterize flaws in metal plates in steam generators of nuclear power plants. Our goal is to evaluate a map of the relative conductivity that represents the flaw. This nonlinear ill-posed problem is difficult to solve, and a forward model is needed. First, we studied existing forward models to choose the one best adapted to our case. Finite difference and finite element methods matched our application very well. We adapted contrast source inversion (CSI) type methods to the chosen model, and a new criterion was proposed. These methods are based on the minimization of the weighted errors of the model equations, coupling and observation, and allow an error on the equations. Reconstruction quality turned out to improve as the error on the coupling equation decreased. We resorted to augmented Lagrangian techniques to constrain the coupling equation and to avoid conditioning problems. In order to overcome the ill-posed character of the problem, prior information was introduced about the shape of the flaw and the values of the relative conductivity. The efficiency of the methods is illustrated with simulated flaws in the 2D case.

  8. Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.

    PubMed

    Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim

    2007-09-01

    The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing career, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.

  9. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
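
    For context, the transform-and-quantize pipeline the invention builds on can be sketched in a few lines. This is the generic JPEG-style flow in Python with SciPy, not the invention itself; the adaptive, per-image matrix design (luminance masking, contrast masking, error pooling) is only summarized in the abstract:

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, Q):
          # 8x8 pixel block -> DCT coefficients divided by the perceptual
          # quantization matrix Q and rounded; coarser entries of Q spend
          # fewer bits and allow more (ideally invisible) error there.
          coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
          return np.round(coeffs / Q).astype(int)

      def reconstruct_block(qcoeffs, Q):
          return idctn(qcoeffs * Q, norm="ortho") + 128.0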

  10. Error analysis of high-rate GNSS precise point positioning for seismic wave measurement

    NASA Astrophysics Data System (ADS)

    Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan

    2017-06-01

    High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism behind this surprisingly good short-term performance of high-rate PPP, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is rather poor with a large DOP value.

  11. Probability of Detection of Genotyping Errors and Mutations as Inheritance Inconsistencies in Nuclear-Family Data

    PubMed Central

    Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael

    2002-01-01

    Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
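
    Detection rates of this kind can also be approximated by direct simulation. The sketch below implements the random-allele error model for a family with both parents genotyped; allele frequencies, family size and trial count are free parameters, and this is a toy replacement for the authors' analytic calculation:

      import numpy as np

      rng = np.random.default_rng(0)

      def mendel_consistent(mom, dad, kids):
          # With both parents genotyped, each child can be checked on its own:
          # one allele must be attributable to each parent.
          return all((a in mom and b in dad) or (b in mom and a in dad)
                     for a, b in kids)

      def detection_rate(freqs, n_kids=2, trials=20_000):
          freqs = np.asarray(freqs)
          k = len(freqs)
          hits = 0
          for _ in range(trials):
              mom = rng.choice(k, 2, p=freqs)
              dad = rng.choice(k, 2, p=freqs)
              kids = np.stack([np.array([rng.choice(mom), rng.choice(dad)])
                               for _ in range(n_kids)])
              kids[0, rng.integers(2)] = rng.choice(k, p=freqs)  # random-allele error
              hits += not mendel_consistent(mom, dad, kids)
          return hits / trials

      print(detection_rate([0.5, 0.5]))  # biallelic, equifrequent: the hardest case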

  12. Partial compensation interferometry for measurement of surface parameter error of high-order aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Li, Tengfei; Hu, Yao

    2018-01-01

    Surface parameters are the properties that describe the shape of an aspheric surface; they mainly include the vertex radius of curvature (VROC) and the conic constant (CC). The VROC affects basic properties, such as the focal length, of an aspheric surface, while the CC is the basis for classifying aspheric surfaces. The deviations of the two parameters are defined as surface parameter error (SPE). Precisely measuring SPE is critical for manufacturing and aligning aspheric surfaces. Generally, SPE of an aspheric surface is measured directly by curvature fitting on absolute profile measurement data from contact or non-contact testing, and most interferometry-based methods adopt null compensators or null computer-generated holograms to measure SPE. To our knowledge, there is no effective way to measure SPE of a high-order aspheric surface with non-null interferometry. In this paper, based on the theory of slope asphericity and the best compensation distance (BCD) established in our previous work, we propose an SPE measurement method for high-order aspheric surfaces in a partial compensation interferometry (PCI) system. In the procedure, we first establish a system of two equations by utilizing the SPE-caused BCD change and surface shape change. Then, we can simultaneously obtain the VROC error and CC error in the PCI system by solving the equations. Simulations verify the method, and the results show a high relative accuracy.

  13. Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects

    NASA Astrophysics Data System (ADS)

    Ben Salah, Ahmed; Ragot, Nicolas; Paquet, Thierry

    2013-01-01

    The French National Library (BnF) has launched many mass digitization projects in order to give access to its collection. The indexing of digital documents on Gallica (the digital library of the BnF) is done through their textual content, obtained from service providers that use optical character recognition (OCR) software. OCR software has become an increasingly complex system composed of several subsystems dedicated to the analysis and the recognition of the elements in a page. However, the reliability of these systems is always an issue at stake. Indeed, in some cases, we can find errors in OCR outputs that occur because of an accumulation of several errors at different levels in the OCR process. One of the frequent errors in OCR outputs is missed text components, and the presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results from the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Patterns (LBP) descriptors coupled with OCR results, to control their consistency. The experimental results show that our method detects 84.15% of the missed textual components, by comparing the OCR ALTO file outputs (produced by the service providers) to the images of the documents.
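
    Of the two descriptor families mentioned, LBP is compact enough to sketch directly. A plain 8-neighbour implementation over a grayscale patch follows (NumPy; the Radon descriptors and the consistency check against OCR output are not reproduced):

      import numpy as np

      def lbp_histogram(patch):
          # 8-neighbour local binary patterns of a 2-D grayscale array:
          # each interior pixel gets one bit per neighbour that is >= it.
          c = patch[1:-1, 1:-1]
          h, w = patch.shape
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          codes = np.zeros(c.shape, dtype=np.int32)
          for bit, (dy, dx) in enumerate(offsets):
              nb = patch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
              codes |= (nb >= c).astype(np.int32) << bit
          return np.bincount(codes.ravel(), minlength=256)  # 256-bin texture signature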

  14. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    Pole coordinate predictions from the different contributors to the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts, by examining the time series of differences between them and the subsequently observed IERS pole coordinates data. The mean absolute errors, standard deviations, skewness and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences at particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by a combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinates data, as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
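
    The error statistics described are straightforward to compute per prediction length. A sketch with scipy.stats, assuming one row per prediction epoch and one column per day of lead time (the array layout is our assumption):

      import numpy as np
      from scipy import stats

      def prediction_error_moments(diffs):
          # diffs[i, j]: predicted-minus-observed pole coordinate for
          # prediction epoch i at lead time j.
          return {
              "mae":  np.nanmean(np.abs(diffs), axis=0),
              "std":  np.nanstd(diffs, axis=0, ddof=1),
              "skew": stats.skew(diffs, axis=0, nan_policy="omit"),
              # Fisher (excess) kurtosis: 0 for a normal distribution,
              # negative = platykurtic, positive = leptokurtic.
              "kurt": stats.kurtosis(diffs, axis=0, nan_policy="omit"),
          }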

  15. Photomask CD and LER characterization using Mueller matrix spectroscopic ellipsometry

    NASA Astrophysics Data System (ADS)

    Heinrich, A.; Dirnstorfer, I.; Bischoff, J.; Meiner, K.; Ketelsen, H.; Richter, U.; Mikolajick, T.

    2014-10-01

    Critical dimension and line edge roughness on photomask arrays are determined with Mueller matrix spectroscopic ellipsometry. Arrays with large sinusoidal perturbations are measured at different azimuth angles and compared with simulations based on rigorous coupled wave analysis. Experiment and simulation show that line edge roughness leads to characteristic changes in the different Mueller matrix elements. The influence of line edge roughness is interpreted as an increase in the isotropic character of the sample. The changes in the Mueller matrix elements are very similar when the arrays are statistically perturbed with rms roughness values in the nanometer range, suggesting that the results on the sinusoidal test structures are also relevant for "real" mask errors. Critical dimension errors and line edge roughness have a similar impact on the SE MM measurement. To distinguish between the two deviations, a strategy based on the calculation of sensitivities and correlation coefficients for all Mueller matrix elements is shown. The Mueller matrix elements M13/M31 and M34/M43 are the most suitable elements due to their high sensitivities to critical dimension errors and line edge roughness and, at the same time, a low correlation coefficient between the two influences. From the simulated sensitivities, it is estimated that the measurement accuracy has to be on the order of 0.01 and 0.001 for the detection of 1 nm critical dimension error and 1 nm line edge roughness, respectively.

  16. Assisting Movement Training and Execution With Visual and Haptic Feedback.

    PubMed

    Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme

    2018-01-01

    In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
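
    The core regulation rule, corrective intensity scaled by how unlikely the observed motion is under the expert model, can be sketched for a single Gaussian over trajectory points. The paper's model is richer (distributions over full trajectories learned from demonstrations); the Gaussian and the gain below are stand-ins:

      import numpy as np

      def corrective_feedback(x, mean, cov, gain=1.0):
          """Direction and intensity of feedback pulling x toward the expert mean.

          Intensity grows with the Mahalanobis distance of x from the expert
          distribution, i.e. with how unlikely the observed execution is."""
          diff = mean - x
          m2 = diff @ np.linalg.solve(cov, diff)        # squared Mahalanobis distance
          intensity = gain * (1.0 - np.exp(-0.5 * m2))  # 0 on-model, saturates at gain
          return intensity * diff / (np.linalg.norm(diff) + 1e-9)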

  17. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present.
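
    The inflation is easy to reproduce by simulation. Below, both groups are drawn from the same skewed, discrete "sum score" distribution (a 10-item test with low success probability), so the null hypothesis is true by construction; the score model is our assumption, not the authors' data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      def trim(x, z_cut=2.0):
          # Common (and problematic) practice: drop per-group Z-score outliers.
          z = (x - x.mean()) / x.std(ddof=1)
          return x[np.abs(z) <= z_cut]

      def type1_rate(n=25, reps=20_000):
          hits = 0
          for _ in range(reps):
              a = rng.binomial(10, 0.2, n).astype(float)  # "short, difficult test"
              b = rng.binomial(10, 0.2, n).astype(float)
              _, p = stats.ttest_ind(trim(a), trim(b))
              hits += p < 0.05
          return hits / reps

      print(type1_rate())  # expect a rejection rate above the nominal 0.05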

  18. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.

  19. An Evaluation of Commercial Pedometers for Monitoring Slow Walking Speed Populations.

    PubMed

    Beevi, Femina H A; Miranda, Jorge; Pedersen, Christian F; Wagner, Stefan

    2016-05-01

    Pedometers are considered desirable devices for monitoring physical activity. Two population groups of interest include patients having undergone surgery in the lower extremities or who are otherwise weakened through disease, medical treatment, or surgery procedures, as well as the slow walking senior population. For these population groups, pedometers must be able to perform reliably and accurately at slow walking speeds. The objectives of this study were to evaluate the step count accuracy of three commercially available pedometers, the Yamax (Tokyo, Japan) Digi-Walker® SW-200 (YM), the Omron (Kyoto, Japan) HJ-720 (OM), and the Fitbit (San Francisco, CA) Zip (FB), at slow walking speeds, specifically at 1, 2, and 3 km/h, and to raise awareness of the necessity of focusing research on step-counting devices and algorithms for slow walking populations. Fourteen participants 29.93 ± 4.93 years of age were requested to walk on a treadmill at the three specified speeds, in four trials of 100 steps each. The devices were worn by the participants on the waist belt. The pedometer counts were recorded, and the error percentage was calculated. The error rate of all three evaluated pedometers decreased with the increase of speed: at 1 km/h the error rates varied from 87.11% (YM) to 95.98% (FB), at 2 km/h the error rates varied from 17.27% (FB) to 46.46% (YM), and at 3 km/h the error rates varied from 22.46% (YM) to a slight overcount of 0.70% (FB). It was observed that all the evaluated devices have high error rates at 1 km/h and mixed error rates at 2 km/h, and at 3 km/h the error rates are the smallest of the three assessed speeds, with the OM and the FB having a slight overcount. These results show that research on pedometers' software and hardware should focus more on accurate step detection at slow walking speeds.

  1. Children's Reading Preferences in Fiction.

    ERIC Educational Resources Information Center

    Norris, Rob; And Others

    1979-01-01

    Children, ages 8-9 and 11-12, rated fiction they had just read. Boys preferred supernatural, adventure and mystery stories while girls rated fairy and pony stories highly. Both preferred characters of their own sex. Sex differences were smaller in the 11-12 age group. (CP)

  2. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  3. Implementation and application of an interactive user-friendly validation software for RADIANCE

    NASA Astrophysics Data System (ADS)

    Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.

    2012-02-01

    RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.

  4. Characterization of Acoustic Emission Parameters During Testing of Metal Liner Reinforced with Fully Resin Impregnated CNG Cylinder

    NASA Astrophysics Data System (ADS)

    Kenok, R.; Jomdecha, C.; Jirarungsatian, C.

    The aim of this paper is to study the acoustic emission (AE) parameters obtained from CNG cylinders during pressurization. AE from flaw propagation, material integrity, and pressurization of the cylinder was the main focus of the characterization. CNG cylinders conforming to ISO 11439, of the fully resin-wrapped, metal-liner type, were tested by hydrostatic stressing. The pressure was step-increased to 1.1 times the operating pressure. Two AE sensors with a resonance frequency of 150 kHz were mounted on the cylinder wall to detect AE throughout the testing. From the experimental results, AE could be detected from the pressurization rate, material integrity, and flaw propagation in the cylinder wall. AE parameters including amplitude, count, energy (MARSE), duration and rise time were analyzed to distinguish the AE data. The results show that the AE of flaw propagation was different in character from that of pressurization. In particular, AE detected from flaws in the resin wrap and in the metal liner was significantly different. Both AE sensors could be used to accurately locate flaw propagation with a linear location pattern; the location error was less than ±5 cm.

  5. Morphometric and molecular characterization of fungus Pestalotiopsis using nuclear ribosomal DNA analysis.

    PubMed

    Gehlot, Praveen; Singh, S K; Pathak, Rakesh

    2012-09-01

    The taxonomy of the fungus Pestalotiopsis based on morphological characters has been equivocal. Molecular characterization of ten Pestalotiopsis species was done based on nuclear ribosomal DNA internal transcribed spacer (ITS) amplifications. Results of the analyses showed that species of the genus Pestalotiopsis are monophyletic. We report ITS length variations, single nucleotide polymorphisms (SNPs) and insertions/deletions (INDELs) among the ten species of Pestalotiopsis that did not cause any phylogenetic error at either the genus or species designation level. New gene sequences have been assigned GenBank accession numbers HM190146 to HM190155 by the National Center for Biotechnology Information, USA.

  6. Advances in Anisotropic Materials for Optical Switching

    DTIC Science & Technology

    2010-09-16

    [Text and figure garbled in the source scan. Recoverable fragments refer to the polarization of light (Figure 2(b)); to CW and pulsed beams in materials containing two benzene rings with donor-acceptor ("push-pull") conjugation, characterized by response times on the order of 1 s, and others characterized by response times on the order of 0.001 s; and to azo LCs that combine the high photosensitivity of azobenzenes with the high...]

  7. 21st Century Skin Findings Response.

    PubMed

    Reese, V; Croley, J A; Ryan, M P; Wagner, R F

    2018-04-28

    We read with interest the letter by Ishida et al, "Skin Findings of 21st Century Movie Characters."1 The authors conclude that the prevalence of movie villains with cutaneous lesions in films released since 2000 is lower than in films released in the 20th century. Reviewing their examples, we note some frank errors in the data presented. Immortan Joe from "Mad Max: Fury Road" is listed as having a "lip deficit." This is due to trauma, and under his breathing apparatus there is marked scarring.

  8. Observations of the Mf ocean tide from Geosat altimetry

    NASA Technical Reports Server (NTRS)

    Cartwright, David E.; Ray, Richard D.

    1990-01-01

    Zonal averages of the 13.66-day Mf tide are derived from one year of Geosat altimetry records. The orbit errors are reduced by 1/revolution corrections taken over long (several day) arcs. The short-period tides are removed using a model previously derived from the same data. The Mf zonal averages indicate definite nonequilibrium character at nearly all latitudes. The imaginary admittances indicate a Q of at least 8; such a value is consistent with a simplified theory of coupled gravitational and vorticity modes and suggests a value for Proudman's 'friction period' about 123 days.

  9. Design and Application of a Circuit for Measuring Frequency and Duty Cycle of Stimulated Bioelectrical Signal

    NASA Astrophysics Data System (ADS)

    Tang, Li-Ming; Chang, Ben-Kang; Liu, Tie-Bing; Wu, Min; Ling, Gang

    2002-12-01

    To design a new type of circuit for measuring the frequency and duty cycle of stimulated bioelectrical signals for the project 'the map of neuron-threshold in human brain and its clinical application'. The circuit was designed according to the character of stimulated bioelectrical signals. It was tested, improved, and then used in the neuron-threshold stimulator. The circuit was found to be very accurate for measuring frequency, and the error in measuring duty cycle was below 0.2%. The circuit is well designed, simple, easy to use, and can be applied in many systems.
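
    A software counterpart of such a measurement, estimating frequency and duty cycle from a sampled two-level waveform, might look like the sketch below. Edge counting between rising edges is our assumption about the method; the abstract does not describe the circuit's internals:

      import numpy as np

      def frequency_and_duty(signal, fs):
          """signal: 1-D array of 0/1 samples; fs: sampling rate in Hz."""
          rising = np.flatnonzero(np.diff(signal) > 0) + 1  # rising-edge indices
          if len(rising) < 2:
              raise ValueError("need at least two rising edges")
          period_samples = np.mean(np.diff(rising))
          # Average over whole cycles only, so truncation cannot bias the duty cycle.
          duty = float(np.mean(signal[rising[0]:rising[-1]]))
          return fs / period_samples, duty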

  10. Refractive errors in medical students in Singapore.

    PubMed

    Woo, W W; Lim, K A; Yang, H; Lim, X Y; Liew, F; Lee, Y S; Saw, S M

    2004-10-01

    Refractive errors are becoming more of a problem in many societies, with prevalence rates of myopia in many Asian urban countries reaching epidemic proportions. This study aims to determine the prevalence rates of various refractive errors in Singapore medical students. 157 second year medical students (aged 19-23 years) in Singapore were examined. Refractive error measurements were determined using a stand-alone autorefractor. Additional demographical data was obtained via questionnaires filled in by the students. The prevalence rate of myopia in Singapore medical students was 89.8 percent (Spherical equivalence (SE) at least -0.50 D). Hyperopia was present in 1.3 percent (SE more than +0.50 D) of the participants and the overall astigmatism prevalence rate was 82.2 percent (Cylinder at least 0.50 D). Prevalence rates of myopia and astigmatism in second year Singapore medical students are one of the highest in the world.

  11. Social deviance activates the brain's error-monitoring system.

    PubMed

    Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J

    2012-03-01

    Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.

  12. Cryptographic robustness of a quantum cryptography system using phase-time coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  13. Automatic learning rate adjustment for self-supervising autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate, until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed-loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure, such as a broken joint, or an environmental change, such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and fault tolerance of the system.
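
    The error-coupled cooling described above condenses to a few lines. A sketch with invented filter and gain constants (the original ties the rate to a filtered visual positioning error in the same spirit):

      class ErrorCoupledLearningRate:
          """Learning rate that tracks a low-pass-filtered positioning error."""

          def __init__(self, alpha=0.1, gain=0.5, floor=1e-4):
              self.alpha, self.gain, self.floor = alpha, gain, floor
              self.avg_err = None

          def update(self, raw_error: float) -> float:
              # An exponential moving average stands in for the paper's filter.
              if self.avg_err is None:
                  self.avg_err = raw_error
              else:
                  self.avg_err += self.alpha * (raw_error - self.avg_err)
              # The rate rises by itself after a fault and anneals as the error
              # shrinks, so no predetermined cooling schedule is needed.
              return max(self.gain * self.avg_err, self.floor)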

  14. The effects of castration, preslaughter stress and zeranol implants on beef: Part 2-Cooking properties and flavor of loin steaks from bovine males.

    PubMed

    Jeremiah, L E; Newman, J A; Tong, A K; Gibson, L L

    1988-01-01

    A total of 144 male crossbred calves were allocated to four castration or implant treatments (unimplanted bulls; unimplanted steers; bulls implanted with zeranol at 100 days of age and reimplanted at intervals of 69, 93 and 56 days thereafter; bulls implanted with zeranol at 168 days of age and reimplanted at intervals of 93 and 56 days thereafter) and two preslaughter shipping treatments (minimum preslaughter stress, with cattle shipped and slaughtered within 4 h of leaving the feedlot pen; normal preslaughter stress, with cattle mixed, trucked 160 km, and slaughtered up to 24 h after leaving the feedlot pen). These cattle were slaughtered and striploin steaks were removed after 6 days of post-mortem aging. Evaluations of these steaks were then conducted using both an experienced laboratory taste panel and a highly trained professional flavor profile panel. Results indicated that: (1) steaks from bulls had higher cooking losses than their counterparts from steers, when minimum preslaughter stress was applied; and required longer cooking times under both preslaughter handling treatments; (2) steaks from unimplanted bulls had greater cooking losses and required longer cooking times than their counterparts from implanted bulls under normal preslaughter stress, but not under minimum preslaughter stress; (3) higher proportions of bull steaks than steer steaks contained inappropriate flavor character notes, under both minimum and normal levels of preslaughter stress; (4) both castration and preslaughter handling affected the intensity and order of appearance of specific flavor character notes; (5) the level of preslaughter stress significantly influenced the detection of specific flavor character notes in steaks from both bulls and steers; (6) steaks from steers under minimum preslaughter stress were rated significantly higher in flavor amplitude than their counterparts from bulls when under normal preslaughter stress, and steaks from steers under minimum preslaughter stress received higher flavor desirability scores than steaks from bulls under both minimum and normal preslaughter stress; (7) zeranol implants influenced the appearance and the order of appearance of specific flavor character notes under both minimum and normal levels of preslaughter stress; (8) both zeranol implants and the length of time animals were implanted appeared to increase the intensity of certain inappropriate character notes, and to decrease the intensity of certain appropriate character notes; (9) steaks from implanted bulls received lower flavor amplitude ratings than their counterparts from unimplanted bulls under normal preslaughter stress, but not under minimum preslaughter stress; (10) the level of preslaughter stress influenced both the appearance and order of appearance of specific flavor character notes in both implanted and unimplanted bull steaks; (11) the intensities of certain flavor character notes were influenced by differences in the level of preslaughter stress in both implanted and unimplanted bull steaks, and higher levels usually resulted in inappropriate character notes being more intense; (12) steaks from bulls in both implant groups received lower flavor amplitude ratings when normal preslaughter stress was applied, clearly indicating the deleterious effect of the combination of zeranol implants and normal preslaughter stress on bull beef flavor; and (13) the deleterious effect of the combination of zeranol implants and normal preslaughter stress on bull beef flavor could not be explained on the basis of greater production of 'dark cutting' beef.

  15. An Automated Method to Generate e-Learning Quizzes from Online Language Learner Writing

    ERIC Educational Resources Information Center

    Flanagan, Brendan; Yin, Chengjiu; Hirokawa, Sachio; Hashimoto, Kiyota; Tabata, Yoshiyuki

    2013-01-01

    In this paper, the entries of Lang-8, a Social Networking Site (SNS) for learning and practicing foreign languages, were analyzed and found to contain similar rates of errors for most error categories reported in previous research. These similarly rated errors were then processed using an algorithm to determine corrections suggested…

  16. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...

  17. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are in demand. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to the size of barcodes (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9 at the expense of a rate of read losses just on the order of 10^-6. PMID:26492348
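
    The error-correction idea can be illustrated without the LDPC machinery: any barcode set with minimum pairwise Hamming distance d corrects floor((d-1)/2) mismatches under nearest-neighbour decoding. The greedy random search below is a toy stand-in for the paper's quaternary LDPC construction:

      import random

      random.seed(0)
      BASES = "ACGT"

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def greedy_barcode_set(length=12, min_dist=5, target=96, tries=200_000):
          # Keep random candidates that stay min_dist away from all kept codes.
          codes = []
          for _ in range(tries):
              cand = "".join(random.choice(BASES) for _ in range(length))
              if all(hamming(cand, c) >= min_dist for c in codes):
                  codes.append(cand)
                  if len(codes) == target:
                      break
          return codes

      def decode(read, codes, max_err=2):
          # Nearest-neighbour decoding; reject (a "read loss") beyond max_err.
          best = min(codes, key=lambda c: hamming(read, c))
          return best if hamming(read, best) <= max_err else None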

  19. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  20. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
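
    As an aside, the 16-bit CRC recommended by CCSDS is, to the best of my knowledge, the CCITT variant (polynomial 0x1021, all-ones preset); the sketch below illustrates how such a checksum detects transmission errors and is not flight-qualified code.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16 as commonly specified for CCSDS frames
    (polynomial x^16 + x^12 + x^5 + 1, all-ones preset)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry payload"
tag = crc16_ccitt(frame)
# The receiver recomputes the CRC over the received frame;
# any mismatch with the appended tag flags a transmission error.
assert crc16_ccitt(frame) == tag
print(hex(tag))
```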

  1. Mapping DNA polymerase errors by single-molecule sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David F.; Lu, Jenny; Chang, Seungwoo

    2016-05-16

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
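
    The barcoding strategy lends itself to a compact illustration: group reads by their unique tag and take a per-position majority vote, so that isolated sequencing errors are voted out. The function names and the minimum-depth threshold are illustrative assumptions, not the authors' pipeline.

```python
from collections import Counter, defaultdict

def consensus(reads, min_depth=3):
    """Per-position majority vote over same-length reads; returns None
    if coverage is too low to out-vote sequencing errors."""
    if len(reads) < min_depth:
        return None
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

# Group reads by their unique molecular tag, then collapse each group.
by_tag = defaultdict(list)
for tag, read in [("UID1", "ACGT"), ("UID1", "ACGA"), ("UID1", "ACGT")]:
    by_tag[tag].append(read)

for tag, reads in by_tag.items():
    print(tag, consensus(reads))  # UID1 ACGT: the lone ACGA error is voted out
```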

  3. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% among 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficits. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  4. The association between semantic dementia and surface dyslexia in Japanese.

    PubMed

    Fushimi, Takao; Komori, Kenjiro; Ikeda, Manabu; Lambon Ralph, Matthew A; Patterson, Karalyn

    2009-03-01

    One theory about reading suggests that producing the correct pronunciations of written words, particularly less familiar words with an atypical spelling-sound relationship, relies in part on knowledge of the word's meaning. This hypothesis has been supported by reports of surface dyslexia in large case-series studies of English-speaking/reading patients with semantic dementia (SD), but would have increased credibility if it applied to other languages and writing systems as well. The hypothesis predicts that, of the two systems used to write Japanese, SD patients should be unimpaired at oral reading of kana because of its invariant relationship between orthography and phonology. By contrast, oral reading of kanji should be impaired in a graded fashion depending on the consistency characteristics of the kanji target words, with worst performance on words whose component characters take 'minority' (atypical) pronunciations, especially if the words are of lower frequency. Errors in kanji reading should primarily reflect assignment of more typical readings to the component characters in these atypical words. In the largest-ever-reported case series of Japanese patients with semantic dementia, we tested and confirmed this hypothesis.

  5. Offline handwritten word recognition using MQDF-HMMs

    NASA Astrophysics Data System (ADS)

    Ramachandrula, Sitaram; Hambarde, Mangesh; Patial, Ajay; Sahoo, Dushyant; Kochar, Shaivi

    2015-01-01

    We propose an improved HMM formulation for offline handwriting recognition (HWR). The main contribution of this work is the use of the modified quadratic discriminant function (MQDF) [1] within the HMM framework. In an MQDF-HMM, the state observation likelihood is calculated by a weighted combination of MQDF likelihoods of the individual Gaussians of a GMM (Gaussian Mixture Model). The quadratic discriminant function (QDF) of a multivariate Gaussian can be rewritten to avoid the inverse of the covariance matrix by using its eigenvalues and eigenvectors. The MQDF is derived from the QDF by substituting an appropriate constant for the badly estimated smallest eigenvalues. This approach controls the estimation errors of the non-dominant eigenvectors and eigenvalues of the covariance matrix for which the training data are insufficient. MQDF has been shown to improve character recognition performance [1]. Using MQDF within an HMM improves the computation, storage, and modeling power of the HMM when training data are limited. We obtained encouraging results on offline handwritten character (NIST database) and word recognition in English using MQDF-HMMs.
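
    A minimal sketch of the MQDF computation described above, assuming illustrative values for the number of retained eigenpairs k and the clamping constant delta; the authors' HMM integration and likelihood weighting are not reproduced here.

```python
import numpy as np

def mqdf_score(x, mu, cov, k, delta):
    """MQDF distance: the k largest eigenpairs of the covariance are trusted,
    the remaining (badly estimated) eigenvalues are clamped to delta."""
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending
    lam = np.concatenate([evals[:k], np.full(len(evals) - k, delta)])
    proj = evecs.T @ (x - mu)                   # coordinates in the eigenbasis
    # Quadratic term plus log-determinant term, both with clamped eigenvalues.
    return float(np.sum(proj**2 / lam) + np.sum(np.log(lam)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # toy training data, 10-dim features
score = mqdf_score(X[0], X.mean(axis=0), np.cov(X.T), k=4, delta=0.5)
print(score)
```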

  6. Going nuts: Measuring free-fall acceleration by analyzing the sound of falling metal pieces

    NASA Astrophysics Data System (ADS)

    Kuhn, Jochen; Vogt, Patrik; Theilmann, Florian

    2016-03-01

    Galileo presented the kinematics of one-dimensional accelerated motion with ease and in terms of elegant geometry. Moreover, he believed, "Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it." In classroom practice, however, it can be difficult to reveal this mathematical heart of nature; free fall and other accelerated motions often get obscured by friction or other sources of error. In this paper, we introduce a method of analyzing free-fall motion indirectly by evaluating the noise of freely falling metal pieces. The method connects a deeper understanding of the mathematical structure of accelerated motion with the possibility to derive a numerical value for the free-fall acceleration g.
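
    The underlying arithmetic is easy to sketch: once impact times are picked from the audio track, a least-squares fit of s = (1/2)gt^2 to the known drop heights recovers g. The heights and times below are fabricated for illustration only.

```python
import numpy as np

# Known drop heights of the metal pieces (m) and impact times picked
# from the audio recording (s); values are illustrative only.
heights = np.array([0.2, 0.45, 0.8, 1.25, 1.8])
times = np.array([0.202, 0.303, 0.404, 0.505, 0.606])

# s = (g/2) * t^2 is linear in t^2, so fit the slope through the origin:
# slope = sum(s * t^2) / sum(t^4), and g is twice that slope.
g = 2 * np.sum(heights * times**2) / np.sum(times**4)
print(f"g = {g:.2f} m/s^2")  # close to 9.8 for these made-up values
```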

  7. The assessment of cognitive errors using an observer-rated method.

    PubMed

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  8. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics.

  10. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    PubMed

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  11. Decoy-state quantum key distribution with more than three types of photon intensity pulses

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2018-04-01

    The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events, plus the error rate of single-photon events, are needed to give a good enough lower bound on the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate only up to about 1% relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds on the above detection and error rates is numerically stable, although these bounds are related to the inversion of a matrix with a high condition number.
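
    A rough numerical sketch of the estimation problem alluded to above, under the standard decoy-state assumption that the gain at intensity mu is a Poisson mixture of photon-number yields, Q(mu) = sum_n e^(-mu) mu^n / n! * Y_n. The intensities and yields below are made up; the point is that more intensities give an over-determined but ill-conditioned linear system.

```python
import numpy as np
from math import exp, factorial

mus = np.array([0.1, 0.2, 0.4, 0.6, 0.8])  # illustrative photon intensities
N = 4                                       # truncate photon number at n < N

# Design matrix of Poisson photon statistics: M[i, n] = e^{-mu_i} mu_i^n / n!
M = np.array([[exp(-m) * m**n / factorial(n) for n in range(N)] for m in mus])

Y_true = np.array([1e-5, 0.01, 0.02, 0.03])   # made-up vacuum/1/2/3-photon yields
Q = M @ Y_true                                 # observed gains per intensity

Y_est, *_ = np.linalg.lstsq(M, Q, rcond=None)  # least-squares yield estimates
print(np.linalg.cond(M))  # large condition number: why numerical stability matters
print(Y_est)              # recovers the yields exactly in this noise-free toy
```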

  12. Families as Partners in Hospital Error and Adverse Event Surveillance

    PubMed Central

    Khan, Alisa; Coffey, Maitreya; Litterer, Katherine P.; Baird, Jennifer D.; Furtak, Stephannie L.; Garcia, Briana M.; Ashland, Michele A.; Calaman, Sharon; Kuzma, Nicholas C.; O’Toole, Jennifer K.; Patel, Aarti; Rosenbluth, Glenn; Destino, Lauren A.; Everhart, Jennifer L.; Good, Brian P.; Hepps, Jennifer H.; Dalal, Anuj K.; Lipsitz, Stuart R.; Yoon, Catherine S.; Zigmont, Katherine R.; Srivastava, Rajendu; Starmer, Amy J.; Sectish, Theodore C.; Spector, Nancy D.; West, Daniel C.; Landrigan, Christopher P.

    2017-01-01

    IMPORTANCE Medical errors and adverse events (AEs) are common among hospitalized children. While clinician reports are the foundation of operational hospital safety surveillance and a key component of multifaceted research surveillance, patient and family reports are not routinely gathered. We hypothesized that a novel family-reporting mechanism would improve incident detection. OBJECTIVE To compare error and AE rates (1) gathered systematically with vs without family reporting, (2) reported by families vs clinicians, and (3) reported by families vs hospital incident reports. DESIGN, SETTING, AND PARTICIPANTS We conducted a prospective cohort study including the parents/caregivers of 989 hospitalized patients 17 years and younger (total 3902 patient-days) and their clinicians from December 2014 to July 2015 in 4 US pediatric centers. Clinician abstractors identified potential errors and AEs by reviewing medical records, hospital incident reports, and clinician reports as well as weekly and discharge Family Safety Interviews (FSIs). Two physicians reviewed and independently categorized all incidents, rating severity and preventability (agreement, 68%–90%; κ, 0.50–0.68). Discordant categorizations were reconciled. Rates were generated using Poisson regression estimated via generalized estimating equations to account for repeated measures on the same patient. MAIN OUTCOMES AND MEASURES Error and AE rates. RESULTS Overall, 746 parents/caregivers consented for the study. Of these, 717 completed FSIs. Their median (interquartile range) age was 32.5 (26–40) years; 380 (53.0%) were nonwhite, 566 (78.9%) were female, 603 (84.1%) were English speaking, and 380 (53.0%) had attended college. Of 717 parents/caregivers completing FSIs, 185 (25.8%) reported a total of 255 incidents, which were classified as 132 safety concerns (51.8%), 102 nonsafety-related quality concerns (40.0%), and 21 other concerns (8.2%). These included 22 preventable AEs (8.6%), 17 nonharmful medical errors (6.7%), and 11 nonpreventable AEs (4.3%) on the study unit. In total, 179 errors and 113 AEs were identified from all sources. Family reports included 8 otherwise unidentified AEs, including 7 preventable AEs. Error rates with family reporting (45.9 per 1000 patient-days) were 1.2-fold (95% CI, 1.1–1.2) higher than rates without family reporting (39.7 per 1000 patient-days). Adverse event rates with family reporting (28.7 per 1000 patient-days) were 1.1-fold (95% CI, 1.0–1.2; P=.006) higher than rates without (26.1 per 1000 patient-days). Families and clinicians reported similar rates of errors (10.0 vs 12.8 per 1000 patient-days; relative rate, 0.8; 95% CI, 0.5–1.2) and AEs (8.5 vs 6.2 per 1000 patient-days; relative rate, 1.4; 95% CI, 0.8–2.2). Family-reported error rates were 5.0-fold (95% CI, 1.9–13.0) higher and AE rates 2.9-fold (95% CI, 1.2–6.7) higher than hospital incident report rates. CONCLUSIONS AND RELEVANCE Families provide unique information about hospital safety and should be included in hospital safety surveillance in order to facilitate better design and assessment of interventions to improve safety. PMID:28241211

  13. Star tracker error analysis: Roll-to-pitch nonorthogonality

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1979-01-01

    An error analysis is described for an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that either one or both of the star-tracker-measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of this error.

  14. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  15. The Importance of Being…Social? Instructor Credibility and the Millennials

    ERIC Educational Resources Information Center

    Gerhardt, Megan W.

    2016-01-01

    Using the framework of generational identity, the current study explores how a range of characteristics impact Millennial perceptions of instructor credibility. Millennial Generation student ratings of the impact of competence, character, and sociability on instructor credibility were compared to faculty ratings of the same characteristics.…

  16. The pressure dependence of oxygen-isotope-exchange rates between solution and apical oxygens on the UO2(OH)4^2- ion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harley, Steven J.; Ohlin, C. Andre; Johnson, Rene L.

    2011-04-06

    The pressure dependence of the isotope exchange rate was determined for apical oxygen atoms in the UO2(OH)4^2-(aq) ion. The results can be interpreted to indicate an associative character for the reaction.

  17. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of its implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
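
    For context, the basic SS7 link error monitor (SUERM) is a leaky-bucket counter; the sketch below assumes the commonly cited parameter values (increment per errored signal unit, leak of one per 256 signal units, changeover threshold of 64) and shows how an intermediate error rate can drive repeated changeovers.

```python
import random

def suerm_changeovers(signal_units, threshold=64, leak_interval=256):
    """Leaky-bucket error monitor (SUERM-style): the counter gains 1 per
    errored signal unit, leaks 1 every leak_interval units, and a
    changeover is declared whenever it reaches the threshold."""
    counter = since_leak = changeovers = 0
    for errored in signal_units:
        if errored:
            counter += 1
            if counter >= threshold:
                changeovers += 1   # link taken out of service; counter resets
                counter = 0
        since_leak += 1
        if since_leak == leak_interval:
            counter = max(counter - 1, 0)
            since_leak = 0
    return changeovers

random.seed(1)
# A unit error rate above the leak rate (1/256, about 0.4%) drifts the counter up,
# so this link oscillates in and out of service.
link = (random.random() < 0.005 for _ in range(200_000))
print(suerm_changeovers(link))
```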

  18. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children with monolingual English was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.

  19. Antidepressant and antipsychotic medication errors reported to United States poison control centers.

    PubMed

    Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-05-08

    To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207,670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15,975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.

  20. High-Rate Strong-Signal Quantum Cryptography

    NASA Technical Reports Server (NTRS)

    Yuen, Horace P.

    1996-01-01

    Several quantum cryptosystems utilizing different kinds of nonclassical lights, which can accommodate high intensity fields and high data rate, are described. However, they are all sensitive to loss and both the high rate and the strong-signal character rapidly disappear. A squeezed light homodyne detection scheme is proposed which, with present-day technology, leads to more than two orders of magnitude data rate improvement over other current experimental systems for moderate loss.

  1. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  2. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research is conducted on tackling and reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
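
    The quasi-static detection step admits a compact sketch: keep only epochs where the magnetometer's field magnitude is steady over a sliding window. The window length and variance threshold below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def quasi_static_mask(mag_xyz, window=50, var_thresh=0.05):
    """Boolean mask of samples whose field-magnitude variance over a sliding
    window is small enough to treat the local magnetic field as quasi-static."""
    norms = np.linalg.norm(mag_xyz, axis=1)
    mask = np.zeros(len(norms), dtype=bool)
    for i in range(len(norms) - window):
        if np.var(norms[i:i + window]) < var_thresh:
            mask[i:i + window] = True
    return mask

rng = np.random.default_rng(0)
steady = rng.normal([20, 0, 44], 0.05, size=(300, 3))     # undisturbed field (uT)
perturbed = rng.normal([25, 10, 40], 3.0, size=(300, 3))  # near ferrous structure
mask = quasi_static_mask(np.vstack([steady, perturbed]))
print(mask[:300].mean(), mask[300:].mean())  # ~1.0 vs ~0.0
```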

  4. Study of Pattern of Change in Handwriting Class Characters with Different Grades of Myopia

    PubMed Central

    Hedge, Shruti Prabhat; Sriram

    2015-01-01

    Introduction Handwriting is a visuo-motor skill highly dependent on visual skills. Any defect in the visual inputs could produce a change in the handwriting. Understanding the variation in handwriting characters caused by changes in visual acuity can help in identifying learning disabilities in children and also in assessing disability in the elderly. In our study we try to analyse and catalogue these changes in the handwriting of a person. Materials and Methods The study was conducted among 100 subjects having normal visual acuity. They were asked to perform a set of writing tasks, after which the same tasks were repeated after inducing different grades of myopia. Changes in the handwriting class characters were analysed and compared across all grades of myopia. Results In the study it was found that letter size, pastiosity, word omissions, and inability to stay on the line all increase with changes in visual acuity. However, these findings are not proportional to the grade of myopia. Conclusion From the findings of the study it can be concluded that myopia significantly influences handwriting and that any change in visual acuity induces corresponding changes in handwriting. Letter size and pastiosity increase, whereas the ability to stay on the line and the space between lines decrease, in different grades of myopia. The changes are not linear and cannot be used to predict the grade of myopia, but can be used as parameters suggestive of refractive error. PMID:26816917

  5. Prediction of pilot reserve attention capacity during air-to-air target tracking

    NASA Technical Reports Server (NTRS)

    Onstott, E. D.; Faulkner, W. H.

    1977-01-01

    Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally and, when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single-loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error on both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency-function multiaxis pilot model are presented.
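
    The attention-allocation rule is straightforward to sketch: each task exposes an urgency computed from its error and error rate, the side task carries a constant urgency, and the model attends exclusively to the most urgent task. The functional forms below are illustrative stand-ins for the paper's identified urgency functions.

```python
def vertical_urgency(error, error_rate):
    # Single-loop task: urgency is just the tracking error magnitude.
    return abs(error)

def horizontal_urgency(error, error_rate, k=0.5):
    # Multiloop task: a nonlinear blend of error and error rate
    # (illustrative form, not the paper's identified function).
    return (error**2 + k * error_rate**2) ** 0.5

SIDE_TASK_URGENCY = 0.3  # constant urgency for the discretionary side task

def attend(v_err, v_rate, h_err, h_rate):
    """Allocate exclusive attention to the most urgent task this instant."""
    urgencies = {
        "vertical": vertical_urgency(v_err, v_rate),
        "horizontal": horizontal_urgency(h_err, h_rate),
        "side_task": SIDE_TASK_URGENCY,
    }
    return max(urgencies, key=urgencies.get)

print(attend(0.1, 0.0, 0.2, 0.1))   # tracking errors small -> 'side_task'
print(attend(0.9, 0.0, 0.2, 0.1))   # vertical error grows  -> 'vertical'
```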

  6. The Effects of Non-Normality on Type III Error for Comparing Independent Means

    ERIC Educational Resources Information Center

    Mendes, Mehmet

    2007-01-01

    The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. These tests were compared in terms of Type III error rates across a variety of population distributions,…

  7. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  8. Errors in fluid therapy in medical wards.

    PubMed

    Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin

    2012-04-01

    Intravenous fluid therapy remains an essential part of patients' care during hospitalization. There are only a few studies that have focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients who are hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information including indication, type, volume and rate of fluid administration was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume and rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected during the study period. The overall rate of fluid therapy errors was 1.3 errors per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in medical wards. Improvements in the knowledge and attention of health-care workers regarding these errors are essential for the prevention of fluid therapy errors.

  9. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
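
    The first of the three methods admits a compact plug-in sketch: average the ensemble members' posterior estimates and evaluate E[1 - max_c p(c|x)]. This is a simplified reading of that framework, not a reimplementation of the article's estimators.

```python
import numpy as np

def bayes_error_estimate(posteriors):
    """posteriors: array of shape (n_classifiers, n_samples, n_classes),
    each slice an estimate of P(class | x). Averages the ensemble members
    and returns the plug-in estimate of the Bayes error rate."""
    p_bar = posteriors.mean(axis=0)               # ensemble-averaged posterior
    return float(np.mean(1.0 - p_bar.max(axis=1)))

rng = np.random.default_rng(0)
# Three mock classifiers, 1000 samples, 4 classes; rows normalized to sum to 1.
raw = rng.random((3, 1000, 4)) ** 3
posteriors = raw / raw.sum(axis=2, keepdims=True)
print(bayes_error_estimate(posteriors))
```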

  10. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on the normality assumption do not agree with actual error rates, because the tails of the noise distribution deviate from a normal distribution. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
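
    As an illustration of robust scoring combined with an empirically estimated null (the article's own summary score is not reproduced here), a median/MAD z-score with a tail-based false discovery rate can be sketched as follows; the cutoff and distributions are illustrative.

```python
import numpy as np

def robust_z(values):
    """Median/MAD z-score: resistant to the heavy tails that break
    normal-theory error-rate estimates."""
    med = np.median(values)
    mad = np.median(np.abs(values - med)) * 1.4826  # consistent with sigma
    return (values - med) / mad

def empirical_fdr(scores, null_scores, cutoff):
    """FDR at a cutoff using a null distribution estimated empirically."""
    false_pos = np.mean(null_scores >= cutoff)      # null exceedance rate
    discoveries = np.mean(scores >= cutoff)
    return false_pos / discoveries if discoveries > 0 else 0.0

rng = np.random.default_rng(0)
# Heavy-tailed plate noise (t with 3 df) plus 50 genuine hits near z = 6.
plate = np.concatenate([rng.standard_t(df=3, size=950), rng.normal(6, 1, 50)])
null = robust_z(rng.standard_t(df=3, size=10_000))
print(empirical_fdr(robust_z(plate), null, cutoff=4))
```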

  11. Novel Approaches for Phylogenetic Inference from Morphological Data and Total-Evidence Dating in Squamate Reptiles (Lizards, Snakes, and Amphisbaenians).

    PubMed

    Pyron, R Alexander

    2017-01-01

    Here, I combine previously underutilized models and priors to perform more biologically realistic phylogenetic inference from morphological data, with an example from squamate reptiles. When coding morphological characters, it is often possible to denote ordered states with explicit reference to observed or hypothetical ancestral conditions. Using this logic, we can integrate across character-state labels and estimate meaningful rates of forward and backward transitions from plesiomorphy to apomorphy. I refer to this approach as MkA, for “asymmetric.” The MkA model incorporates the biological reality of limited reversal for many phylogenetically informative characters, and significantly increases likelihoods in the empirical data sets. Despite this, the phylogeny of Squamata remains contentious. Total-evidence analyses using combined morphological and molecular data and the MkA approach tend toward recent consensus estimates supporting a nested Iguania. However, support for this topology is not unambiguous across data sets or analyses, and no mechanism has been proposed to explain the widespread incongruence between partitions, or the hidden support for various topologies in those partitions. Furthermore, different morphological data sets produced by different authors contain both different characters and different states for the same or similar characters, resulting in drastically different placements for many important fossil lineages. Effort is needed to standardize ontology for morphology, resolve incongruence, and estimate a robust phylogeny. The MkA approach provides a preliminary avenue for investigating morphological evolution while accounting for temporal evidence and asymmetry in character-state changes.
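
    For a single binary ordered character, the MkA idea reduces to a two-state continuous-time Markov model with unequal forward (plesiomorphy to apomorphy) and backward rates; branch transition probabilities follow from the matrix exponential. The rates below are illustrative, not estimates from the squamate data.

```python
import numpy as np
from scipy.linalg import expm

def mka_transition_probs(forward, backward, t):
    """P(t) = expm(Q t) for a 2-state asymmetric Mk model:
    state 0 = plesiomorphy, state 1 = apomorphy."""
    Q = np.array([[-forward, forward],
                  [backward, -backward]])
    return expm(Q * t)

# Gains are ten times more likely than reversals on this branch,
# encoding the limited reversibility the MkA approach is after.
P = mka_transition_probs(forward=1.0, backward=0.1, t=0.5)
print(P)  # rows sum to 1; P[0, 1] is much larger than P[1, 0]
```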

  12. Benchmark for license plate character segmentation

    NASA Astrophysics Data System (ADS)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
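
    Plain Jaccard (intersection-over-union) of bounding boxes is standard; exactly how the proposed Jaccard-centroid coefficient folds in the centroid location is specific to the paper, so the weighting in jaccard_centroid below is a hypothetical placeholder rather than the published definition.

```python
def jaccard(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def centroid(box):
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def jaccard_centroid(pred, truth):
    """Hypothetical variant: IoU discounted by the normalized centroid offset.
    The actual coefficient is defined in the paper itself."""
    (px, py), (tx, ty) = centroid(pred), centroid(truth)
    diag = ((truth[2] - truth[0]) ** 2 + (truth[3] - truth[1]) ** 2) ** 0.5
    offset = ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5 / diag
    return jaccard(pred, truth) * max(0.0, 1.0 - offset)

print(jaccard_centroid((12, 10, 32, 40), (10, 10, 30, 40)))
```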

  13. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  14. The Use of Categorized Time-Trend Reporting of Radiation Oncology Incidents: A Proactive Analytical Approach to Improving Quality and Safety Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette

    Purpose: Radiotherapy is a common treatment for cancer patients. Although the incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time-trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at the prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rate reduced significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rate (p < 0.0001). There were 3.5 times as many near misses reported as actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department, contributing to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.

  15. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
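
    Index (1), the daily count of records created, can be monitored with a simple Shewhart-style control chart; the Poisson variance assumption and the 3-sigma limits below are illustrative simplifications of the paper's time-series models.

```python
import numpy as np

def poisson_control_limits(history, k=3.0):
    """Shewhart-style limits for a count index: mean +/- k * sqrt(mean),
    using the Poisson approximation that the variance equals the mean."""
    mu = np.mean(history)
    return mu - k * np.sqrt(mu), mu + k * np.sqrt(mu)

history = [5023, 4980, 5102, 4955, 5040, 5011, 4990]   # normal daily volumes
lo, hi = poisson_control_limits(history)

today = 3150   # e.g. an interface failure silently dropping records
if not lo <= today <= hi:
    print(f"ALERT: daily record count {today} outside [{lo:.0f}, {hi:.0f}]")
```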

  16. Evaluation of unique identifiers used as keys to match identical publications in Pure and SciVal - a case study from health science.

    PubMed

    Madsen, Heidi Holst; Madsen, Dicte; Gauffriau, Marianne

    2016-01-01

    Unique identifiers (UID) are seen as an effective key to match identical publications across databases or identify duplicates in a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys. We analyze this information to detect errors, if any, in the matching process. As an example we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators number of publications, number of citations, and the average number of citations per publication.  The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: Duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular the duplicate DOIs constitute a problem for the calculation of bibliometric indicators as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.

  18. Aspects of quality assurance in digitizing historical climate data in Germany

    NASA Astrophysics Data System (ADS)

    Mächel, H.; Behrends, J.; Kapala, A.

    2010-09-01

    This contribution presents some of the problems, and offers solutions, regarding the digitization of historical meteorological data, and explains the need for verification and quality control. For the assessment of changes in climate extremes, long-term and complete observational records with a high temporal resolution are needed. However, in most countries, including Germany, such climate data are rare. Therefore, in 2005, the German Weather Service launched a project to inventory and digitize historical daily climatic records in cooperation with the Meteorological Institute of the University of Bonn. Experience with Optical Character Recognition (OCR) shows that it is of very limited use here, as even printed tables (e.g. yearbooks) are not sufficiently recognized (10-20% error rate); for hand-written records, the recognition rate is only about 50%. By comparing daily and monthly values it is possible to auto-detect errors, but they cannot be corrected automatically, since there is often more than one error per month. These erroneous data must then be checked manually on an individual basis, which is significantly more error-prone than direct manual input. Therefore, both precipitation and climate station data are digitized manually. Digitizing one year of precipitation data (including the daily precipitation amount and type, snow amount and type, and weather events such as thunderstorms, fog, etc.) takes about five hours; this includes manual typing, reformatting, and quality control of the digitized data, as well as creating a digital photograph. For climate stations with three observations per day, the working time is 30-50 hours per year of data, depending on the number of parameters and the condition of the documents. Several other problems occur when creating digital records from historical observational data. Older records often used varying units and different conventions; for example, a value of 100 was added to the observed temperatures to avoid negative values. Furthermore, because standardization of the observations was very low when measurements began up to 200 years ago, the data often reflect substantial non-climatic influences. Varying daily observation times make it difficult to calculate a representative daily value. Even unconventionally completed tables cost additional labor and require experienced, trained staff. Data homogenization as well as both manual and automatic quality control may address some of these problems.
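
    The daily-versus-monthly cross-check mentioned above can be sketched as follows; the tolerance, field values, and units are illustrative. As the abstract notes, the check only detects an inconsistency within a month: it cannot say which daily value is wrong, so flagged months still need manual correction.

        # Cross-check keyed daily precipitation against the monthly total
        # printed in the source document. Tolerance and values are made up.
        def check_month(daily_values, monthly_total, tol=0.1):
            """Return None if consistent, else the discrepancy (in mm)."""
            diff = sum(daily_values) - monthly_total
            return None if abs(diff) <= tol else round(diff, 2)

        # One mistyped day (13.2 keyed as 31.2) is caught, but the check
        # alone cannot identify which day must be corrected by hand.
        daily = [0.0, 4.1, 31.2, 0.0, 2.5]          # source had 13.2 on day 3
        print(check_month(daily, monthly_total=19.8))   # -> 18.0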

  19. Renal Drug Dosing

    PubMed Central

    Vogel, Erin A.; Billups, Sarah J.; Herner, Sheryl J.

    2016-01-01

    Objective: The purpose of this study was to compare the effectiveness of an outpatient renal dose adjustment alert via a computerized provider order entry (CPOE) clinical decision support system (CDSS) versus a CDSS with alerts made to dispensing pharmacists. Methods: This was a retrospective analysis of patients with renal impairment and 30 medications that are contraindicated or require dose adjustment in such patients. The primary outcome was the rate of renal dosing errors for study medications dispensed between August and December 2013, when a pharmacist-based CDSS was in place, versus August through December 2014, when a prescriber-based CDSS was in place. A dosing error was defined as a prescription for one of the study medications dispensed to a patient for whom the medication was contraindicated or improperly dosed based on the patient’s renal function. The denominator was all prescriptions for the study medications dispensed during each respective study period. Results: During the pharmacist- and prescriber-based CDSS study periods, 49,054 and 50,678 prescriptions, respectively, were dispensed for one of the included medications. Of these, 878 (1.8%) and 758 (1.5%) prescriptions were dispensed to patients with renal impairment in the respective study periods. Patients in each group were similar with respect to age, sex, and renal function stage. Overall, the five-month error rate was 0.38%. Error rates were similar between the two groups: 0.36% and 0.40% in the pharmacist- and prescriber-based CDSS, respectively (p=0.523). The medication with the highest error rate was dofetilide (0.51% overall), while the medications with the lowest error rate were dabigatran, fondaparinux, and spironolactone (0.00% overall). Conclusions: Prescriber- and pharmacist-based CDSS provided comparable, low rates of potential medication errors. Future studies should be undertaken to examine patient benefits of the prescriber-based CDSS. PMID:27466041
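
    For readers who want to reproduce this kind of comparison, a standard two-sided two-proportion z-test is sketched below. The abstract reports rates but not raw error counts, so the counts used here are rough values implied by the rounded rates; the resulting p-value will therefore not match the published 0.523 exactly.

        # Compare two error rates with a two-sided two-proportion z-test.
        import math

        def two_proportion_test(errors1, n1, errors2, n2):
            """Return (rate1, rate2, z, two-sided p-value)."""
            p1, p2 = errors1 / n1, errors2 / n2
            pooled = (errors1 + errors2) / (n1 + n2)
            se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            p_value = math.erfc(abs(z) / math.sqrt(2))   # = 2 * (1 - Phi(|z|))
            return p1, p2, z, p_value

        # Counts below are approximations implied by the rounded rates
        # (0.36% of 49,054 and 0.40% of 50,678), not the study's raw data.
        print(two_proportion_test(177, 49054, 203, 50678))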

  20. Foam Optics and Mechanics

    NASA Technical Reports Server (NTRS)

    Durian, Douglas J.; Zimmerli, Gregory A.

    2002-01-01

    The Foam Optics and Mechanics (FOAM) project will exploit the microgravity environment to more accurately measure the rheological and optical characteristics of wet aqueous foams. Using both rheology and laser light scattering diagnostics, the goal is to quantify the unusual elastic character of foams in terms of their underlying microscopic structure and dynamics. Of particular interest is determining how the elastic character vanishes, i.e., how the foam 'melts' into a simple viscous liquid, as a function of both increasing liquid content and increasing shear strain rate. The elastic character will be quantified macroscopically by measuring the shear stress as a function of shear strain rate and of time following a step strain; such data will be analyzed in terms of a yield stress, shear moduli, and dynamical time scales. Microscopic information about bubble packing and rearrangement dynamics, from which the macroscopic non-Newtonian properties ultimately arise, will be obtained non-invasively by multiple light scattering: diffuse transmission spectroscopy (DTS) and diffusing wave spectroscopy (DWS). Quantitative trends with materials parameters, most importantly average bubble size and liquid content, will be sought in order to elucidate the fundamental connection between the microscopic structure and dynamics and the macroscopic rheology.
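
    As one concrete way to extract a yield stress from stress versus strain-rate data of the kind described above, the sketch below fits the Herschel-Bulkley form sigma = sigma_y + k * rate**n, a common model for yield-stress fluids such as foams. This is an assumed analysis, not necessarily the FOAM project's own procedure; it requires Python 3.10+ for statistics.linear_regression.

        # Fit the Herschel-Bulkley model by scanning trial yield stresses:
        # for each trial sigma_y, log(sigma - sigma_y) is linear in log(rate),
        # so we keep the trial with the smallest log-space residual.
        import math
        from statistics import linear_regression

        def fit_herschel_bulkley(rates, stresses, n_trials=200):
            best = None
            for i in range(n_trials):
                sy = min(stresses) * i / n_trials       # trial yield stress
                if any(s <= sy for s in stresses):
                    continue
                x = [math.log(r) for r in rates]
                y = [math.log(s - sy) for s in stresses]
                slope, intercept = linear_regression(x, y)
                resid = sum((yi - (slope * xi + intercept)) ** 2
                            for xi, yi in zip(x, y))
                if best is None or resid < best[0]:
                    best = (resid, sy, math.exp(intercept), slope)
            _, sigma_y, k, n = best
            return sigma_y, k, n   # yield stress, consistency, flow index

        rates = [0.1, 0.3, 1.0, 3.0, 10.0]              # 1/s, synthetic
        stresses = [50 + 10 * r ** 0.5 for r in rates]  # Pa, synthetic
        print(fit_herschel_bulkley(rates, stresses))    # ~ (50, 10, 0.5)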
