2011-09-01
LAI Location Area Identity; MANET Mobile Ad-hoc Network; MCC Mobile Country Code; MCD Mobile Communications Device; MNC Mobile Network Code ... tower or present within a geographical area. These conditions relate directly to users who often operate with mobile ad-hoc networks. These types of ... infrastructures. First responders can use these mobile base stations to set up their own networks on the fly, similar to mobile ad-hoc networks
Using hidden Markov models and observed evolution to annotate viral genomes.
McCauley, Stephen; Hein, Jotun
2006-06-01
ssRNA (single-stranded) viral genomes are generally constrained in length and utilize overlapping reading frames to maximally exploit the coding potential within the genome length restrictions. This overlapping coding phenomenon leads to complex evolutionary constraints operating on the genome. In regions which code for more than one protein, silent mutations in one reading frame generally have a protein-coding effect in another. To maximize coding flexibility in all reading frames, overlapping regions are often compositionally biased towards amino acids which are 6-fold degenerate with respect to the 64-codon alphabet. Previous methodologies have used this fact in an ad hoc manner to look for overlapping genes by motif matching. In this paper, differentiated nucleotide compositional patterns in overlapping regions are incorporated into a probabilistic hidden Markov model (HMM) framework, which is used to annotate ssRNA viral genomes. This work focuses on single-sequence annotation and applies an HMM framework to ssRNA viral annotation. A description is given of how the HMM is parameterized while annotating within a missing-data framework. A phylogenetic HMM (Phylo-HMM) extension, as applied to 14 aligned HIV2 sequences, is also presented. This evolutionary extension serves as an illustration of the potential of the Phylo-HMM framework for ssRNA viral genomic annotation. The single-sequence annotation procedure (SSA) is applied to 14 different strains of the HIV2 virus. Further results on alternative ssRNA viral genomes are presented to illustrate more generally the performance of the method. The results of the SSA method are encouraging; however, there is still room for improvement, and since there is overwhelming evidence to indicate that comparative methods can improve coding sequence (CDS) annotation, the SSA method is extended to a Phylo-HMM to incorporate evolutionary information. The Phylo-HMM extension is applied to the same set of 14 HIV2 sequences, which are pre-aligned. The performance improvement that results from including the evolutionary information in the analysis is illustrated.
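The abstract describes the annotation approach only at a high level. As a rough, self-contained illustration of how an HMM can label coding versus non-coding positions in a single nucleotide sequence by Viterbi decoding, consider the sketch below; the two states, transition matrix and emission probabilities are invented for illustration and are not the parameterization used by the authors, whose model additionally handles codon structure and overlapping frames.

```python
# Toy two-state HMM (coding / non-coding) decoded with the Viterbi algorithm.
# All probabilities below are made up for illustration; the paper's model uses
# codon-position-aware states and overlapping-frame emissions.
import math

STATES = ["noncoding", "coding"]
START = {"noncoding": 0.5, "coding": 0.5}
TRANS = {
    "noncoding": {"noncoding": 0.95, "coding": 0.05},
    "coding":    {"noncoding": 0.05, "coding": 0.95},
}
# Coding regions assumed slightly GC-biased; non-coding close to uniform (illustrative only).
EMIT = {
    "noncoding": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
    "coding":    {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20},
}

def viterbi(seq):
    """Return the most probable state path for a nucleotide sequence."""
    vit = [{s: math.log(START[s]) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = [{}]
    for t in range(1, len(seq)):
        vit.append({})
        back.append({})
        for s in STATES:
            best_prev = max(STATES, key=lambda p: vit[t - 1][p] + math.log(TRANS[p][s]))
            vit[t][s] = vit[t - 1][best_prev] + math.log(TRANS[best_prev][s]) + math.log(EMIT[s][seq[t]])
            back[t][s] = best_prev
    # Trace back the best path from the most likely final state.
    last = max(STATES, key=lambda s: vit[-1][s])
    path = [last]
    for t in range(len(seq) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

print(viterbi("ATATATGCGCGCGCATAT"))
```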
12 CFR 505.2 - Public Reading Room.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 6 2013-01-01 2012-01-01 true Public Reading Room. 505.2 Section 505.2 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY FREEDOM OF INFORMATION ACT § 505.2 Public Reading Room. OTS will make materials available for review on an ad hoc basis when necessary. Contact the FOIA Office, Office of Thrift...
12 CFR 505.2 - Public Reading Room.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 6 2012-01-01 2012-01-01 false Public Reading Room. 505.2 Section 505.2 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY FREEDOM OF INFORMATION ACT § 505.2 Public Reading Room. OTS will make materials available for review on an ad hoc basis when necessary. Contact the FOIA Office, Office of Thrift...
12 CFR 505.2 - Public Reading Room.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Public Reading Room. 505.2 Section 505.2 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY FREEDOM OF INFORMATION ACT § 505.2 Public Reading Room. OTS will make materials available for review on an ad hoc basis when necessary. Contact the FOIA Office, Office of Thrift...
12 CFR 505.2 - Public Reading Room.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 6 2014-01-01 2012-01-01 true Public Reading Room. 505.2 Section 505.2 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY FREEDOM OF INFORMATION ACT § 505.2 Public Reading Room. OTS will make materials available for review on an ad hoc basis when necessary. Contact the FOIA Office, Office of Thrift...
12 CFR 505.2 - Public Reading Room.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 5 2011-01-01 2011-01-01 false Public Reading Room. 505.2 Section 505.2 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY FREEDOM OF INFORMATION ACT § 505.2 Public Reading Room. OTS will make materials available for review on an ad hoc basis when necessary. Contact the FOIA Office, Office of Thrift...
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.
Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee
2018-03-24
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
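The abstract states that the redefined metric combines centroid distance with the intersecting road network through a weighted sum, but the formula is not given here. The sketch below shows one plausible shape such a combination could take; the helper inputs (`centroid_km`, `shared_road_km`), the normalization bounds and the weight are assumptions for illustration, not the authors' definition.

```python
# Hypothetical weighted-sum proximity between two ZIP code areas.
# Assumption: a smaller centroid distance raises proximity, and more
# intersecting (connecting) road network also raises it; both terms are
# normalized to [0, 1] before the weighted sum.

def zip_proximity(centroid_km, shared_road_km, max_km=50.0, max_road_km=20.0, w=0.6):
    """Return a proximity score in [0, 1] for a pair of ZIP codes.

    centroid_km    -- distance between the two ZIP-code centroids (km)
    shared_road_km -- length of road segments crossing the shared boundary (km)
    w              -- weight on the centroid-distance term (1 - w on the road term)
    """
    dist_term = 1.0 - min(centroid_km, max_km) / max_km          # closer centroids -> higher
    road_term = min(shared_road_km, max_road_km) / max_road_km   # more connecting roads -> higher
    return w * dist_term + (1.0 - w) * road_term

# Ad-Hoc query: proximity of a single ZIP pair.
print(zip_proximity(centroid_km=7.5, shared_road_km=4.2))

# Top-K query: rank a ZIP code's neighbours by proximity (toy data).
neighbours = {"98105": (3.1, 6.0), "98115": (5.4, 2.5), "98199": (12.0, 0.0)}
top_k = sorted(neighbours, key=lambda z: zip_proximity(*neighbours[z]), reverse=True)[:2]
print(top_k)
```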
ERIC Educational Resources Information Center
Daghio, M. Monica; Fattori, Giuseppe; Ciardullo, Anna V.
2006-01-01
Objectives: We compared two non-alternative methods to assess the readability and learning of easy-to-read educational health materials co-written by physicians, educators and citizens. Methods: Data from seven easy-to-read materials were analyzed. Readability formulae, and ad hoc data on readability and learning were also computed. Results: The…
ERIC Educational Resources Information Center
Marcus, Ginger
2010-01-01
Reading is defined as a socio-cultural act negotiated between text and reader, and the act of reading is considered to be a cognitive process that involves knowledge not only of symbols/letters, vocabulary and structure, but also of culture. In other words, in order to understand the intentions of the author and to formulate meaning, the second…
Continuities in Reading Acquisition, Reading Skill, and Reading Disability.
ERIC Educational Resources Information Center
Perfetti, Charles A.
1986-01-01
Learning to read depends on eventual mastery of coding procedures, and even skilled reading depends on coding processes low in cost to processing resources. Reading disability may be understood as a point on an ability continuum or a wide range of coding ability. Instructional goals of word reading skill, including rapid and fluent word…
A genetic scale of reading frame coding.
Michel, Christian J
2014-08-21
The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code.
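As a concrete illustration of the reading-frame property discussed above, the sketch below brute-force checks whether a set of trinucleotides is comma-free, i.e. whether any codeword can be read in a shifted frame across the concatenation of two codewords. It is a toy property check, not Michel's PrRFC statistic, and the second example code is hand-picked purely for illustration.

```python
# Brute-force comma-freeness check for a trinucleotide code (set of codons).
# A code X is comma-free if, for any two codewords w1, w2 in X, no codeword of X
# appears in the two shifted frames of the 6-letter concatenation w1 + w2.
from itertools import product

def is_comma_free(code):
    code = set(code)
    for w1, w2 in product(code, repeat=2):
        pair = w1 + w2
        if pair[1:4] in code or pair[2:5] in code:
            return False
    return True

# The four Gly codons fail the check (GGG + GGG reads GGG in every frame),
# consistent with the abstract's statement that the Gly code is not circular
# (and hence not comma-free).
print(is_comma_free({"GGA", "GGC", "GGG", "GGT"}))   # False

# A small hand-picked set that does pass the check (illustrative only).
print(is_comma_free({"ACG", "TAC"}))                 # True
```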
Development and validation of a notational system to study the offensive process in football.
Sarmento, Hugo; Anguera, Teresa; Campaniço, Jorge; Leitão, José
2010-01-01
The most striking change in the development of football is the application of science to its problems, in particular the use of increasingly sophisticated technology that, supported by scientific data, allows a "code of reading" the reality of the game to be established. This study therefore describes the process of developing and validating an ad hoc categorization system that allows the different methods of offensive play in football, and their interaction, to be analyzed. Through an exploratory phase of the study, we identified 10 core criteria and the respective behaviors observed for each of these criteria. A panel of five experts was consulted for the purpose of content validation. The resulting instrument is characterized by a combination of field formats and systems of categories. The reliability of the instrument was calculated by intraobserver agreement, and values above 0.95 were achieved for all criteria. Two FC Barcelona games were coded and analyzed, which allowed the detection of various T-patterns. The results show that the instrument serves the purpose for which it was developed and can provide important information for the understanding of game interaction in football.
Ad-Hoc Networks and the Mobile Application Security System (MASS)
2006-01-01
solution to this problem that addresses critical aspects of security in ad-hoc mobile application networks. This approach involves preventing unauthorized ... modification of a mobile application, both by other applications and by hosts, and ensuring that mobile code is authentic and authorized. These ... capabilities constitute the Mobile Application Security System (MASS). The MASS applies effective, robust security to mobile application-based systems
Effect of Color-Coded Notation on Music Achievement of Elementary Instrumental Students.
ERIC Educational Resources Information Center
Rogers, George L.
1991-01-01
Presents results of a study of color-coded notation to teach music reading to instrumental students. Finds no clear evidence that color-coded notation enhances achievement on performing by memory, sight-reading, or note naming. Suggests that some students depended on the color-coding and were unable to read uncolored notation well. (DK)
Reading Difficulties in Adult Deaf Readers of French: Phonological Codes, Not Guilty!
ERIC Educational Resources Information Center
Belanger, Nathalie N.; Baum, Shari R.; Mayberry, Rachel I.
2012-01-01
Deaf people often achieve low levels of reading skills. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers is not yet fully supported in the literature. We investigated skilled and less skilled adult deaf readers' use of orthographic and phonological codes in reading. Experiment 1 used a masked…
Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks
2010-08-31
Goddard I. SUMMARY OF CONTRIBUTIONS We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted ... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We ... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6
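The fragment above refers to iterative decoding of LDPC codes. Purely as general background on what an iterative decoder does, the sketch below runs a classic hard-decision bit-flipping decoder on a tiny hand-made parity-check matrix; the matrix is the (7,4) Hamming code used as a small stand-in for a sparse LDPC matrix, and this is not the extrinsic tree decoder studied in the report.

```python
# Hard-decision bit-flipping decoding on a toy parity-check matrix.
# H is the (7,4) Hamming parity-check matrix, standing in for a sparse LDPC matrix;
# the iteration logic (check syndromes, flip the most-suspect bits, repeat) is the same.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
        if not any(syndrome):
            return word                       # all parity checks satisfied
        # Count, for every bit, how many unsatisfied checks it participates in.
        votes = [sum(s for s, row in zip(syndrome, H) if row[j]) for j in range(len(word))]
        worst = max(votes)
        word = [w ^ 1 if v == worst else w for w, v in zip(word, votes)]
    return word                               # may still contain errors

codeword = [0, 0, 0, 0, 0, 0, 0]              # the all-zero codeword
received = codeword[:]
received[2] ^= 1                              # inject a single bit error
print(bit_flip_decode(received))              # -> [0, 0, 0, 0, 0, 0, 0]
```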
ERIC Educational Resources Information Center
Swank, Linda K.
1994-01-01
Relationships between phonological coding abilities and reading outcomes have implications for differential diagnosis of language-based reading problems. The theoretical construct of specific phonological coding ability is explained, including phonological encoding, phonological awareness and metaphonology, lexical access, working memory, and…
Development of Components of Reading Skill.
ERIC Educational Resources Information Center
Curtis, Mary E.
1980-01-01
Verbal coding and listening comprehension ability differed among skilled and less skilled readers in second, third, and fifth grades. As verbal coding speed increased, comprehension skill became the more important predictor of reading skill. Apparently, verbal coding processes, which are slow, inhibit other reading processes. (Author/CP)
Bélanger, Nathalie N; Mayberry, Rachel I; Rayner, Keith
2013-01-01
Many deaf individuals do not develop the high-level reading skills that will allow them to fully take part in society. To attempt to explain this widespread difficulty in the deaf population, much research has homed in on the use of phonological codes during reading. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers, though not well supported, still lingers in the literature. We investigated skilled and less-skilled adult deaf readers' processing of orthographic and phonological codes in parafoveal vision during reading by monitoring their eye movements and using the boundary paradigm. Orthographic preview benefits were found in early measures of reading for skilled hearing, skilled deaf, and less-skilled deaf readers, but only skilled hearing readers processed phonological codes in parafoveal vision. Crucially, skilled and less-skilled deaf readers showed a very similar pattern of preview benefits during reading. These results support the notion that reading difficulties in deaf adults are not linked to their failure to activate phonological codes during reading.
Cohen, Helen S; Gottshall, Kim R; Graziano, Mariella; Malmstrom, Eva-Maj; Sharpe, Margaret H
2009-01-01
The goal of this study was to determine how occupational and physical therapists learn about vestibular rehabilitation therapy, their educational backgrounds, referral patterns, and their ideas about entry-level and advanced continuing education in vestibular rehabilitation therapy. The Barany Society Ad Hoc Committee for Vestibular Rehabilitation Therapy invited therapists around the world to complete an E-mail survey. Participants were either known to committee members or other Barany Society members, known to other participants, identified from their self-listings on the Internet, or volunteered after reading notices published in publications read by therapists. Responses were received from 133 therapists in 19 countries. They had a range of educational backgrounds, practice settings, and referral patterns. Few respondents had had any training about vestibular rehabilitation during their professional entry-level education. Most respondents learned about vestibular rehabilitation from continuing education courses, interactions with their colleagues, and reading. All of them endorsed the concept of developing standards and educating therapists about vestibular anatomy and physiology, vestibular diagnostic testing, vestibular disorders and current intervention strategies. Therefore, the Committee recommends the development of international standards for education and practice in vestibular rehabilitation therapy.
Coding and Comprehension in Skilled Reading and Implications for Reading Instruction.
ERIC Educational Resources Information Center
Perfetti, Charles A.; Lesgold, Alan M.
A view of skilled reading is suggested that emphasizes an intimate connection between coding and comprehension. It is suggested that skilled comprehension depends on a highly refined facility for generating and manipulating language codes, especially at the phonetic/articulatory level. The argument is developed that decoding expertise should be a…
Planned Comparisons as Better Alternatives to ANOVA Omnibus Tests.
ERIC Educational Resources Information Center
Benton, Roberta L.
Analyses of data are presented to illustrate the advantages of using a priori or planned comparisons rather than omnibus analysis of variance (ANOVA) tests followed by post hoc or a posteriori testing. The two types of planned comparisons considered are planned orthogonal non-trend coding contrasts and orthogonal polynomial or trend contrast coding.…
PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.
Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan
2018-05-01
Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. Supplementary data are available at Bioinformatics online.
Is phonology bypassed in normal or dyslexic development?
Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D
1987-01-01
A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) that phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance. Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill only in adulthood.
Attention in Relation to Coding and Planning in Reading
ERIC Educational Resources Information Center
Mahapatra, Shamita
2015-01-01
A group of 50 skilled readers and a group of 50 less-skilled readers of Grade 5 matched for age and intelligence and selected on the basis of their proficiency in reading comprehension were tested for their competence in word reading and the processes of attention, simultaneous coding, successive coding and planning at three levels, i.e.,…
Text World Theory and real world readers: From literature to life in a Belfast prison
Canning, Patricia
2017-01-01
Cognitive stylistics offers a range of frameworks for understanding (amongst other things) what producers of literary texts ‘do’ with language and how they ‘do’ it. Less prevalent, however, is an understanding of the ways in which these same frameworks offer insights into what readers ‘do’ (and how they ‘do’ it). Text World Theory (Werth, 1999; Gavins, 2007; Whiteley, 2011) has proved useful for understanding how and why readers construct mental representations engendered by the act of reading. However, research on readers’ responses to literature has largely focused on an ‘idealised’ reader or an ‘experimental’ subject-reader often derived from within the academy and conducted using contrived or amended literary fiction. Moreover, the format of traditional book groups (participants read texts privately and discuss them at a later date) as well as online community forums such as Goodreads, means that such studies derive data from post-hoc, rather than real-time textual encounters and discussions. The current study is the first of its kind in analysing real-time reading contexts with real readers during a researcher-led literary project (‘read.live.learn’) in Northern Ireland’s only female prison. In doing so, the study is unique in addressing experimental and post hoc bias. Using Text World Theory, the paper considers the personal and social impact of reader engagement in the talk of the participants. As such, it has three interrelated aims: to argue for the social and personal benefits of reading stylistically rich literature in real-time reading groups; to demonstrate the efficacy of stylistics for understanding how those benefits come about, and to demonstrate the inter-disciplinary value of stylistics, particularly its potential for traversing traditional research parameters. PMID:29278261
Text World Theory and real world readers: From literature to life in a Belfast prison.
Canning, Patricia
2017-05-01
Cognitive stylistics offers a range of frameworks for understanding (amongst other things) what producers of literary texts 'do' with language and how they 'do' it. Less prevalent, however, is an understanding of the ways in which these same frameworks offer insights into what readers 'do' (and how they 'do' it). Text World Theory (Werth, 1999; Gavins, 2007; Whiteley, 2011) has proved useful for understanding how and why readers construct mental representations engendered by the act of reading. However, research on readers' responses to literature has largely focused on an 'idealised' reader or an 'experimental' subject-reader often derived from within the academy and conducted using contrived or amended literary fiction. Moreover, the format of traditional book groups (participants read texts privately and discuss them at a later date) as well as online community forums such as Goodreads, means that such studies derive data from post-hoc, rather than real-time textual encounters and discussions. The current study is the first of its kind in analysing real-time reading contexts with real readers during a researcher-led literary project ('read.live.learn') in Northern Ireland's only female prison. In doing so, the study is unique in addressing experimental and post hoc bias. Using Text World Theory, the paper considers the personal and social impact of reader engagement in the talk of the participants. As such, it has three interrelated aims: to argue for the social and personal benefits of reading stylistically rich literature in real-time reading groups; to demonstrate the efficacy of stylistics for understanding how those benefits come about, and to demonstrate the inter-disciplinary value of stylistics, particularly its potential for traversing traditional research parameters.
Secure Mobile Distributed File System (MDFS)
2011-03-01
dissemination of data. In a mobile ad-hoc network, there are two classes of devices: content generators and content consumers. One implementation of ... use of infrastructure mode is necessary because current Android implementations do not support Mobile Ad-Hoc network without modification of the ...
Anomaly Detection for Data Reduction in an Unattended Ground Sensor (UGS) Field
2014-09-01
information (shown with solid lines in the diagram). Typically, this would be a mobile ad-hoc network (MANET). The clusters are connected to other nodes ... interquartile ranges; MANET mobile ad-hoc network; OSUS Open Standards for Unattended Sensors; TOC tactical operations center; UAVs unmanned aerial vehicles ...
Tramontano, A; Macchiato, M F
1986-01-01
An algorithm to determine the probability that a reading frame codes for a protein is presented. It is based on the results of our previous studies on the thermodynamic characteristics of a translated reading frame. We also develop a prediction procedure to distinguish between coding and non-coding reading frames. The procedure is based on the characteristics of the putative product of the DNA sequence and not on periodicity characteristics of the sequence, so the prediction is not biased by the presence of overlapping translated reading frames or by the presence of translated reading frames on the complementary DNA strand. PMID:3753761
ERIC Educational Resources Information Center
van Staden, Annalene
2013-01-01
The reading skills of many deaf children lag several years behind those of hearing children, and there is a need for identifying reading difficulties and implementing effective reading support strategies in this population. This study embraces a balanced reading approach, and investigates the efficacy of applying multi-sensory coding strategies…
Semantic and Phonological Coding in Poor and Normal Readers.
ERIC Educational Resources Information Center
Vellutino, Frank R.; And Others
1995-01-01
Using poor and normal readers, three studies evaluated semantic coding and phonological coding deficits as explanations for reading disability. It was concluded that semantic coding deficits are unlikely causes of difficulties in poor readers in early stages but accrue with prolonged reading difficulties in older readers. Phonological coding…
Understanding the Requirements for Open Source Software
2009-06-17
GNOME and K Development Environment (KDE) for end-user interfaces, the Eclipse and NetBeans interactive development environments for Java-based Web ... 4.1. Informal Post-hoc Assertion of OSS Requirements vs. Requirements Elicitation ... 4.2. Requirements Reading, Sense-making, and Accountability vs. Requirements Analysis
Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, Charles J.; Shi, Xizeng
The specific goals of this project were to: Further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); Validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for DADIMAG Version 2 executable code.
De-coding Reading at Work: Workplace Reading Competencies.
ERIC Educational Resources Information Center
Searle, Jean
1998-01-01
Naturalistic observations and interviews with service workers found on-the-job reading was based on knowledge of codes and rules of practice and required problem-solving and metacognitive strategies. Workplace competencies should be considered within their social and cultural context. (SK)
Evidence-Based Reading and Writing Assessment for Dyslexia in Adolescents and Young Adults
Nielsen, Kathleen; Abbott, Robert; Griffin, Whitney; Lott, Joe; Raskind, Wendy; Berninger, Virginia W.
2016-01-01
The same working memory and reading and writing achievement phenotypes (behavioral markers of genetic variants) validated in prior research with younger children and older adults in a multi-generational family genetics study of dyslexia were used to study 81 adolescents and young adults (ages 16 to 25) from that study. Dyslexia is defined here as word reading and spelling skills that fall below the population mean and below the individual's ability to use oral language to express thinking. These working memory predictor measures were given and used to predict reading and writing achievement: Coding (storing and processing) heard and spoken words (phonological coding), read and written words (orthographic coding), base words and affixes (morphological coding), and accumulating words over time (syntax coding); Cross-Code Integration (phonological loop for linking phonological name and orthographic letter codes and orthographic loop for linking orthographic letter codes and finger sequencing codes), and Supervisory Attention (focused and switching attention and self-monitoring during written word finding). Multiple regressions showed that most predictors explained individual differences in at least one reading or writing outcome, but which predictors explained unique variance beyond shared variance depended on outcome. ANOVAs confirmed that research-supported criteria for dyslexia validated for younger children and their parents could be used to diagnose which adolescents and young adults did (n=31) or did not (n=50) meet research criteria for dyslexia. Findings are discussed in reference to the heterogeneity of phenotypes (behavioral markers of genetic variables) and their application to assessment for accommodations and ongoing instruction for adolescents and young adults with dyslexia. PMID:26855554
DCU@TRECMed 2012: Using Ad-Hoc Baselines for Domain-Specific Retrieval
2012-11-01
description to extend the query, for example: Patients with complicated GERD who receive endoscopy will be extended with Gastroesophageal reflux disease ... Diseases and Related Health Problems, version 9) for the patient’s admission or discharge status [1, 5]; treating negation (e.g. negative test results or...codes were mapped to a description of the code, usually a short phrase/sentence. For instance, the ICD9 code 253.5 corresponds to the disease Diabetes
Validation of suicide and self-harm records in the Clinical Practice Research Datalink
Thomas, Kyla H; Davies, Neil; Metcalfe, Chris; Windmeijer, Frank; Martin, Richard M; Gunnell, David
2013-01-01
Aims: The UK Clinical Practice Research Datalink (CPRD) is increasingly being used to investigate suicide-related adverse drug reactions. No studies have comprehensively validated the recording of suicide and nonfatal self-harm in the CPRD. We validated general practitioners' recording of these outcomes using linked Office for National Statistics (ONS) mortality and Hospital Episode Statistics (HES) admission data. Methods: We identified cases of suicide and self-harm recorded using appropriate Read codes in the CPRD between 1998 and 2010 in patients aged ≥15 years. Suicides were defined as patients with Read codes for suicide recorded within 95 days of their death. International Classification of Diseases codes were used to identify suicides/hospital admissions for self-harm in the linked ONS and HES data sets. We compared CPRD-derived cases/incidence of suicide and self-harm with those identified from linked ONS mortality and HES data, national suicide incidence rates and published self-harm incidence data. Results: Only 26.1% (n = 590) of the ‘true’ (ONS-confirmed) suicides were identified using Read codes. Furthermore, only 55.5% of Read code-identified suicides were confirmed as suicide by the ONS data. Of the HES-identified cases of self-harm, 68.4% were identified in the CPRD using Read codes. The CPRD self-harm rates based on Read codes had similar age and sex distributions to rates observed in self-harm hospital registers, although rates were underestimated in all age groups. Conclusions: The CPRD recording of suicide using Read codes is unreliable, with significant inaccuracy (over- and under-reporting). Future CPRD suicide studies should use linked ONS mortality data. The under-reporting of self-harm appears to be less marked. PMID:23216533
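To make the validation logic concrete, the sketch below recomputes the two headline proportions described above (ONS-confirmed suicides carrying a suicide Read code within 95 days, and Read-coded suicides confirmed by ONS) on invented toy records; the record layout, field names and dates are hypothetical, and this is not the study's analysis code.

```python
# Toy re-creation of the two validation proportions described in the abstract:
#  - sensitivity: ONS-confirmed suicides that also have a suicide Read code
#    recorded within 95 days of the date of death
#  - positive predictive value (PPV): Read-coded suicides confirmed by ONS
# Record layouts and dates are invented for illustration.
from datetime import date

ons_suicides = {               # patient id -> ONS date of death (underlying cause = suicide)
    1: date(2005, 3, 1),
    2: date(2007, 8, 15),
    3: date(2009, 1, 20),
}
read_coded = {                 # patient id -> date the suicide Read code was entered in CPRD
    1: date(2005, 3, 20),      # within 95 days -> true positive
    3: date(2010, 6, 1),       # more than 95 days after death -> not counted
    4: date(2008, 2, 2),       # no ONS confirmation -> false positive
}

confirmed = {
    pid for pid, coded in read_coded.items()
    if pid in ons_suicides and abs((coded - ons_suicides[pid]).days) <= 95
}

sensitivity = len(confirmed) / len(ons_suicides)
ppv = len(confirmed) / len(read_coded)
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")   # 0.33 and 0.33 on the toy data
```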
Thompson, Robert; Tanimoto, Steve; Lyman, Ruby Dawn; Geselowitz, Kira; Begay, Kristin Kawena; Nielsen, Kathleen; Nagy, William; Abbott, Robert; Raskind, Marshall; Berninger, Virginia
2018-05-01
Children in grades 4 to 6 (N = 14) who despite early intervention had persisting dyslexia (impaired word reading and spelling) were assessed before and after computerized reading and writing instruction aimed at subword, word, and syntax skills shown in four prior studies to be effective for treating dyslexia. During the 12 two-hour sessions once a week after school they first completed HAWK Letters in Motion© for manuscript and cursive handwriting, HAWK Words in Motion© for phonological, orthographic, and morphological coding for word reading and spelling, and HAWK Minds in Motion© for sentence reading comprehension and written sentence composing. A reading comprehension activity in which sentences were presented one word at a time or one added word at a time was introduced. Next, to instill hope they could overcome their struggles with reading and spelling, they read and discussed stories about the struggles of Buckminster Fuller, who overcame early disabilities to make important contributions to society. Finally, they engaged in the new Kokopelli's World (KW)©, blocks-based online lessons, to learn computer coding in introductory programming by creating stories in sentence blocks (Tanimoto and Thompson 2016). Participants improved significantly in hallmark word decoding and spelling deficits of dyslexia, three syntax skills (oral construction, listening comprehension, and written composing), reading comprehension (with decoding as covariate), handwriting, orthographic and morphological coding, orthographic loop, and inhibition (focused attention). They answered more reading comprehension questions correctly when they had read sentences presented one word at a time (eliminating both regressions out and regressions in during saccades) than when presented one added word at a time (eliminating only regressions out during saccades). Indicators of improved self-efficacy that they could learn to read and write were observed. Reminders to pay attention and stay on task needed before adding computer coding were not needed after computer coding was added.
Harris, Margaret; Moreno, Constanza
2006-01-01
Nine children with severe-profound prelingual hearing loss and single-word reading scores not more than 10 months behind chronological age (Good Readers) were matched with 9 children whose reading lag was at least 15 months (Poor Readers). Good Readers had significantly higher spelling and reading comprehension scores. They produced significantly more phonetic errors (indicating the use of phonological coding) and more often correctly represented the number of syllables in spelling than Poor Readers. They also scored more highly on orthographic awareness and were better at speech reading. Speech intelligibility was the same in the two groups. Cluster analysis revealed that only three Good Readers showed strong evidence of phonetic coding in spelling although seven had good representation of syllables; only four had high orthographic awareness scores. However, all 9 children were good speech readers, suggesting that a phonological code derived through speech reading may underpin reading success for deaf children.
Sanchez, Christopher A; Jaeger, Allison J
2015-02-01
Perceptual manipulations, such as changes in font type or figure-ground contrast, have been shown to increase judgments of difficulty or effort related to the presented material. Previous theory has suggested that this is the result of changes in online processing or perhaps the post-hoc influence of perceived difficulty recalled at the time of judgment. These two experiments examine through which of these mechanisms (or both) the fluency effect is produced. Results indicate that disfluency does in fact change in situ reading behavior, and this change significantly mediates judgments. Eye movement analyses corroborate this suggestion and reveal a difference in how people read a disfluent presentation. These findings support the notion that readers are using perceptual cues in their reading experiences to change how they interact with the material, which in turn produces the observed biases.
Joint Experimentation on Scalable Parallel Processors (JESPP)
2006-04-01
made use of local embedded relational databases, implemented using sqlite on each node of an SPP to execute queries and return results via an ad hoc ... Experimentation Directorate (J9) required expansion of its joint semi-automated forces (JSAF) code capabilities, including number of entities, behavior complexity
Mobile Tracking and Location Awareness in Disaster Relief and Humanitarian Assistance Situations
2012-09-01
establishing mobile ad-hoc networks. Smartphones also have accelerometers that are used to detect any motion by the device. Furthermore, almost every ...
ERIC Educational Resources Information Center
Freeman, Nancy K.; Swick, Kevin J.
2007-01-01
In 2000 ACEI began an exploration of the potential role that a code of professional ethics might have in the Association. The Public Affairs Committee recommended that the Executive Board appoint an ad hoc Ethics Committee. That committee, under the leadership of Nita Barbour, accepted its charge to provide guidance to colleagues who struggle to…
Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J
1997-01-01
To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.
Reading on Paper and Screen among Senior Adults: Cognitive Map and Technophobia
Hou, Jinghui; Wu, Yijie; Harrell, Erin
2017-01-01
While the senior population has been increasingly engaged with reading on mobile technologies, research that specifically documents the impact of technologies on reading for this age group has still been lacking. The present study investigated how different reading media (screen versus paper) might result in different reading outcomes among older adults due to both cognitive and psychological factors. Using a laboratory experiment with 81 participants aged 57 to 85, our results supported past research and showed the influence of cognitive map formation on readers’ feelings of fatigue. We contributed empirical evidence to the contention that reading on a screen could match that of reading from paper if the presentation of the text on screen resembles that of the print. Our findings also suggested that individual levels of technophobia were an important barrier to older adults’ effective use of mobile technologies for reading. In the post hoc analyses, we further showed that technophobia was correlated with technology experience, certain personality traits, and age. The present study highlights the importance of providing tailored support that helps older adults overcome psychological obstacles in using technologies. PMID:29312073
The Influence of Negative Advertising Frames on Political Cynicism and Politician Accountability.
ERIC Educational Resources Information Center
Schenck-Hamlin, William J.; Procter, David E.; Rumsey, Deborah J.
2000-01-01
Examines the influence of negative political advertising frames on the thoughts and feelings undergraduate students generate in response to campaign advertising. Finds that participants were more likely to generate cynical comments and hold politicians accountable for the country's ills when reading candidate theme advertisements than ad hoc issue…
31 CFR Appendix M to Subpart A - Financial Crimes Enforcement Network
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Financial Crimes Enforcement Network... Crimes Enforcement Network 1. In general. This appendix applies to the Financial Crimes Enforcement Network (FinCEN). 2. Public Reading Room. FinCEN will provide a room on an ad hoc basis when necessary...
31 CFR Appendix M to Subpart A - Financial Crimes Enforcement Network
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Financial Crimes Enforcement Network... Crimes Enforcement Network 1. In general. This appendix applies to the Financial Crimes Enforcement Network (FinCEN). 2. Public Reading Room. FinCEN will provide a room on an ad hoc basis when necessary...
ERIC Educational Resources Information Center
Moody, Barbara J., Ed.; And Others
A coding system for categorizing reading skills was developed in order to provide manuals for each grade level (preprimer through 6) that would aid teachers in locating materials on a particular skill by page number in a specific text. A skill code key of the skills usually taught at a given reading grade level is based on specific basal test…
Phase II Evaluation of Clinical Coding Schemes
Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith
1997-01-01
Abstract Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p <.00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p <.005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p <. 00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p <. 004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343
Teaching Reading to the Disadvantaged Adult.
ERIC Educational Resources Information Center
Dinnan, James A.; Ulmer, Curtis, Ed.
This manual is designed to assess the background of the individual and to bring him to the stage of unlocking the symbolic codes called Reading and Mathematics. The manual begins with Introduction to a Symbolic Code (The Thinking Process and The Key to Learning Basis), and continues with Basic Reading Skills (Readiness, Visual Discrimination,…
2012-03-01
by using a common communication technology there is no need to develop a complicated communications plan and generate an ad-hoc communications ... Maintaining an accurate Common Operational Picture (COP) is a strategic requirement for ... Subject terms: Android Programming, Cloud Computing, Common Operating Picture, Web Programming.
Is Phonology Bypassed in Normal or Dyslexic Development?
ERIC Educational Resources Information Center
Pennington, Bruce F.; And Others
1987-01-01
Two studies involving 215 subjects tested the hypothesis that orthographic coding bypasses phonological coding after the early stages of reading or spelling. It was found that nondyslexics continue to develop phonological coding skill until adulthood and rely on it for reading and spelling to a significantly greater extent than do dyslexics.…
Action and perception in literacy: A common-code for spelling and reading.
Houghton, George
2018-01-01
There is strong evidence that reading and spelling in alphabetical scripts depend on a shared representation (common-coding). However, computational models usually treat the two skills separately, producing a wide variety of proposals as to how the identity and position of letters is represented. This article treats reading and spelling in terms of the common-coding hypothesis for perception-action coupling. Empirical evidence for common representations in spelling-reading is reviewed. A novel version of the Start-End Competitive Queuing (SE-CQ) spelling model is introduced, and tested against the distribution of positional errors in Letter Position Dysgraphia, data from intralist intrusion errors in spelling to dictation, and dysgraphia because of nonperipheral neglect. It is argued that no other current model is equally capable of explaining this range of data. To pursue the common-coding hypothesis, the representation used in SE-CQ is applied, without modification, to the coding of letter identity and position for reading and lexical access, and a lexical matching rule for the representation is proposed (Start End Position Code model, SE-PC). Simulations show the model's compatibility with benchmark findings from form priming, its ability to account for positional effects in letter identification priming and the positional distribution of perseverative intrusion errors. The model supports the view that spelling and reading use a common orthographic description, providing a well-defined account of the major features of this representation.
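As an illustration of the competitive-queuing idea that SE-CQ builds on (letters activated in parallel on a positional gradient anchored at the start and end of the word, then produced serially by a choose-and-suppress cycle), the toy sketch below spells a word and shows how distorting the gradient yields letter-position errors of the kind mentioned above. The gradient shapes and parameters are invented and do not reproduce the SE-CQ model itself.

```python
# Toy competitive-queuing speller: every letter position gets an activation from a
# start-anchored gradient plus a (weak) end-anchored gradient; output is produced
# serially by repeatedly picking the most active letter and then suppressing it.
# Gradient shapes and weights are invented; this is not the SE-CQ parameterization.

def spell(word, start_w=1.0, end_w=0.05, decay=0.7):
    n = len(word)
    activation = [start_w * decay**i + end_w * decay**(n - 1 - i) for i in range(n)]
    produced = []
    for _ in range(n):
        i = max(range(n), key=lambda j: activation[j])  # winner-take-all choice
        produced.append(word[i])
        activation[i] = float("-inf")                   # suppress the produced letter
    return "".join(produced)

print(spell("reading"))               # -> 'reading': the combined gradient is monotone, so order is correct
print(spell("reading", end_w=0.6))    # -> 'regandi': an exaggerated end anchor produces letter-position errors
```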
Kieffer, Michael J; Vukovic, Rose K
2012-01-01
Drawing on the cognitive and ecological domains within the componential model of reading, this longitudinal study explores heterogeneity in the sources of reading difficulties for language minority learners and native English speakers in urban schools. Students (N = 150) were followed from first through third grade and assessed annually on standardized English language and reading measures. Structural equation modeling was used to investigate the relative contributions of code-related and linguistic comprehension skills in first and second grade to third grade reading comprehension. Linguistic comprehension and the interaction between linguistic comprehension and code-related skills each explained substantial variation in reading comprehension. Among students with low reading comprehension, more than 80% demonstrated weaknesses in linguistic comprehension alone, whereas approximately 15% demonstrated weaknesses in both linguistic comprehension and code-related skills. Results were remarkably similar for the language minority learners and native English speakers, suggesting the importance of their shared socioeconomic backgrounds and schooling contexts.
ERIC Educational Resources Information Center
Miellet, Sebastien; Sparrow, Laurent
2004-01-01
This experiment employed the boundary paradigm during sentence reading to explore the nature of early phonological coding in reading. Fixation durations were shorter when the parafoveal preview was the correct word than when it was a spelling control pseudoword. In contrast, there was no significant difference between correct word and…
Write to Read: Investigating the Reading-Writing Relationship of Code-Level Early Literacy Skills
ERIC Educational Resources Information Center
Jones, Cindy D.; Reutzel, D. Ray
2015-01-01
The purpose of this study was to examine whether the code-related features used in current methods of writing instruction in kindergarten classrooms transfer reading outcomes for kindergarten students. We randomly assigned kindergarten students to 3 instructional groups: a writing workshop group, an interactive writing group, and a control group.…
ERIC Educational Resources Information Center
Moses, Annie M.; Golos, Debbie B.; Bennett, Colleen M.
2015-01-01
Early childhood educators need access to research-based practices and materials to help all children learn to read. Some theorists have suggested that individuals learn to read through "dual coding" (i.e., a verbal code and a nonverbal code) and may benefit from more than one route to literacy (e.g., dual coding theory). Although deaf…
Cross-layer model design in wireless ad hoc networks for the Internet of Things.
Yang, Xin; Wang, Ling; Xie, Jian; Zhang, Zhaolin
2018-01-01
Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low traffic periods, reducing congestion during high traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols without a negative influence on delay, particularly for large scale networks under conditions of highly contrasting high and low traffic periods.
Cross-layer model design in wireless ad hoc networks for the Internet of Things
Wang, Ling; Xie, Jian; Zhang, Zhaolin
2018-01-01
Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low traffic periods, reducing congestion during high traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols without a negative influence on delay, particularly for large scale networks under conditions of highly contrasting high and low traffic periods. PMID:29734355
NASA Astrophysics Data System (ADS)
Mense, Mario; Schindelhauer, Christian
We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r,w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message with distance of at most w. Furthermore, depending on the values r,w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus, w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages as r and w can optimally be adapted to read or write intensive applications; only w symbols must be updated if the message x changes completely, which differs from other codes that always need to rewrite y completely when x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.
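The parameter relations stated above can be made concrete with a small worked instance; the numeric values below are illustrative assumptions, not figures from the paper.

```latex
% Worked instance of the RW code parameter relations (values are assumptions).
\[
  k \le r \le n, \qquad k \le w \le n, \qquad d = n - r + 1 .
\]
\[
  \text{Example: } n = 9,\; k = 5,\; r = 6,\; w = 8
  \;\Rightarrow\; d = 9 - 6 + 1 = 4 .
\]
\[
  \text{Reed--Solomon special case } (r = k,\; w = n):\quad
  d = n - k + 1, \text{ e.g. } n = 9,\; k = 5 \;\Rightarrow\; d = 5 .
\]
```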
31 CFR Appendix D to Subpart A of... - United States Secret Service
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false United States Secret Service D...—United States Secret Service 1. In general. This appendix applies to the United States Secret Service. 2. Public reading room. The United States Secret Service will provide a room on an ad hoc basis when...
31 CFR Appendix D to Subpart A of... - United States Secret Service
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false United States Secret Service D...—United States Secret Service 1. In general. This appendix applies to the United States Secret Service. 2. Public reading room. The United States Secret Service will provide a room on an ad hoc basis when...
31 CFR Appendix D to Subpart A of... - United States Secret Service
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false United States Secret Service D...—United States Secret Service 1. In general. This appendix applies to the United States Secret Service. 2. Public reading room. The United States Secret Service will provide a room on an ad hoc basis when...
31 CFR Appendix D to Subpart A of... - United States Secret Service
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false United States Secret Service D...—United States Secret Service 1. In general. This appendix applies to the United States Secret Service. 2. Public reading room. The United States Secret Service will provide a room on an ad hoc basis when...
31 CFR Appendix D to Subpart A of... - United States Secret Service
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false United States Secret Service D...—United States Secret Service 1. In general. This appendix applies to the United States Secret Service. 2. Public reading room. The United States Secret Service will provide a room on an ad hoc basis when...
A Reaction to the 2007 MLA Report
ERIC Educational Resources Information Center
Bernhardt, Elizabeth B.
2010-01-01
Mortimer Adler and Charles van Doren wisely remind us in "How to Read a Book" (1972) that readers must come to terms with an author "before" beginning the interpretation process. Following this logic, the first question that should be posed about "Foreign Languages and Higher Education: New Structures for a Changed World" by MLA Ad Hoc Committee on…
The Design and Implementation of a Read Prediction Buffer
1992-12-01
ERIC Educational Resources Information Center
Elbro, Carsten; And Others
1994-01-01
Compared to controls, adults (n=102) who reported a history of difficulties in learning to read were disabled in phonological coding, but less disabled in reading comprehension. Adults with poor phonological coding skills had basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness,…
MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm
Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.
2014-01-01
The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the signal to interference ratio (SIR) requirement and the energy required to transmit from one node to another are satisfied at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control, and our approach achieves a 10% improvement compared with the scheme that uses a handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that jointly addresses power combining, interference, data rate, and energy while ensuring the signal to interference ratio in an ad hoc network. The proposed genetic algorithm achieves better performance (15%) than the CSMA-CDMA protocol without optimization. Therefore, we show by simulation the effectiveness of the proposed protocol in terms of the throughput. PMID:25140339
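As a rough illustration of the kind of rate-and-power selection problem described above, the following sketch runs a toy genetic algorithm. The network size, gain model, power and rate levels, SIR thresholds, and fitness weights are all assumptions for illustration; this is not the authors' protocol.

```python
# Toy genetic algorithm: pick a (power, rate) pair per node so that links meet
# the SIR needed for their rate, trading throughput against energy.
import random

N_NODES = 6
POWERS = [0.5, 1.0, 2.0]                 # hypothetical transmit power levels
RATES = [1, 2, 4]                        # hypothetical data rates (Mbps)
SIR_MIN = {1: 2.0, 2: 4.0, 4: 8.0}       # assumed SIR needed to sustain each rate
NOISE = 0.02
GAIN = [[1.0 if i == j else 0.05 for j in range(N_NODES)] for i in range(N_NODES)]

def sir(i, genome):
    # SIR at node i given every node's transmit power (genome[j][0]).
    interference = sum(GAIN[j][i] * genome[j][0] for j in range(N_NODES) if j != i)
    return GAIN[i][i] * genome[i][0] / (interference + NOISE)

def fitness(genome):
    # Throughput of links that meet their SIR target, minus an energy penalty.
    value = 0.0
    for i, (power, rate) in enumerate(genome):
        value += rate if sir(i, genome) >= SIR_MIN[rate] else -rate
        value -= 0.1 * power
    return value

def random_genome():
    return [(random.choice(POWERS), random.choice(RATES)) for _ in range(N_NODES)]

def evolve(pop_size=40, generations=200, p_mut=0.1):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_NODES)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [(random.choice(POWERS), random.choice(RATES))
                     if random.random() < p_mut else gene for gene in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best (power, rate) per node:", best)
print("fitness:", round(fitness(best), 2))
```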
ERIC Educational Resources Information Center
Holbrook, M. Cay; MacCuspie, P. Ann
2010-01-01
Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…
Kurbanoglu, Serap; Boustany, Joumana
2018-01-01
This study reports the descriptive and inferential statistical findings of a survey of academic reading format preferences and behaviors of 10,293 tertiary students worldwide. The study hypothesized that country-based differences in schooling systems, socioeconomic development, culture or other factors might have an influence on preferred formats, print or electronic, for academic reading, as well as the learning engagement behaviors of students. The main findings are that country of origin has little to no relationship with or effect on reading format preferences of university students, and that the broad majority of students worldwide prefer to read academic course materials in print. The majority of participants report better focus and retention of information presented in print formats, and more frequently prefer print for longer texts. Additional demographic and post-hoc analysis suggests that format preference has a small relationship with academic rank. The relationship between task demands, format preferences, and reading comprehension is discussed. Additional outcomes and implications for the fields of education, psychology, computer science, information science and human-computer interaction are considered. PMID:29847560
Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi
2016-06-15
Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns, called read mapping profiles, can be detected; these are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods for correctly clustering the read mapping profiles with respect to 5'-end processing and 3'-end processing from degradation patterns and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental data of small RNA sequencing for the common marmoset brain, SHARAKU succeeded in identifying the significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work is available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA004502. yasu@bio.keio.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
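To make the idea of comparing read mapping profiles concrete, here is a deliberately simplified dynamic-programming alignment of two per-position depth profiles. It is not the SHARAKU algorithm, which also incorporates primary and secondary structure; the profiles and the gap cost are made-up values.

```python
# Simplified DP alignment of two read mapping profiles (per-position read depths).
# Cost of pairing two positions = absolute depth difference; skipping a position
# costs a fixed gap penalty.
def align_profiles(a, b, gap_cost=1.0):
    """Return the minimal alignment cost between depth profiles a and b."""
    n, m = len(a), len(b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]   # dp[i][j]: best cost for a[:i], b[:j]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap_cost
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1][j - 1] + abs(a[i - 1] - b[j - 1])
            dp[i][j] = min(match,
                           dp[i - 1][j] + gap_cost,    # skip a position in a
                           dp[i][j - 1] + gap_cost)    # skip a position in b
    return dp[n][m]

# Two hypothetical profiles sharing a similar 5'-end processing pattern.
profile_x = [0, 2, 9, 9, 8, 1, 0, 0]
profile_y = [0, 1, 8, 9, 9, 2, 0]
print(align_profiles(profile_x, profile_y))
```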
Circular codes revisited: a statistical approach.
Gonzalez, D L; Giannerini, S; Rosa, R
2011-04-21
In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and underwent a rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Rupley, William H.; Paige, David D.; Rasinski, Timothy V.; Slough, Scott W.
2015-01-01
Paivio's Dual-Coding Theory (1991) and Mayer's Multimedia Principle (2000) form the foundation for proposing a multi-coding theory centered around Multi-Touch Tablets and the newest generation of e-textbooks to scaffold struggling readers in reading and learning from science textbooks. Using E. O. Wilson's "Life on Earth: An Introduction"…
Wright, Imogen A.; Travers, Simon A.
2014-01-01
The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618
2015-06-01
Subject terms: Raspberry Pi, Robot Operating System (ROS), Arduino.
1982-05-06
Table-of-contents fragments only: input/output interrupt code register (IOIC); console input/output; memory fault status register (MFSR).
An evaluation of the lag of accommodation using photorefraction.
Seidemann, Anne; Schaeffel, Frank
2003-02-01
The lag of accommodation which occurs in most human subjects during reading has been proposed to explain the association between reading and myopia. However, the measured lags are variable among different published studies and current knowledge on its magnitude rests largely on measurements with the Canon R-1 autorefractor. Therefore, we have measured it with another technique, eccentric infrared photorefraction (the PowerRefractor), and studied how it can be modified. Particular care was taken to ensure correct calibration of the instrument. Ten young adult subjects were refracted both in the fixation axis of the right eye and from the midline between both eyes, while they read text both monocularly and binocularly at 1.5, 2, 3, 4 and 5 D distance ("group 1"). A second group of 10 subjects ("group 2"), measured from the midline between both eyes, was studied to analyze the effects of binocular vs monocular vision, addition of +1 or +2 D lenses, and of letter size. Spherical equivalents (SE) were analyzed in all cases. The lag of accommodation was variable among subjects (standard deviations among groups and viewing distances ranging from 0.18 to 1.07 D) but was significant when the measurements were done in the fixation axis (0.35 D at 3 D target distance to 0.60 D at 5 D with binocular vision; p<0.01 or better all cases). Refracting from the midline between both eyes tended to underestimate the lag of accommodation although this was significant only at 5 D (ANOVA: p<0.0001, post hoc t-test: p<0.05). There was a small improvement in accommodation precision with binocular compared to monocular viewing but significance was reached only for the 5 D reading target (group 1--lags for a 3/4/5 D target: 0.35 vs 0.41 D/0.48 vs 0.47 D/0.60 vs 0.66 D, ANOVA: p<0.0001, post hoc t-test: p<0.05; group 2--0.29 vs 0.12 D, 0.33 vs 0.16 D, 0.23 vs -0.31 D, ANOVA: p<0.0001, post hoc t-test: p<0.05). Adjusting the letter height for constant angular subtense (0.2 deg) induced scarcely more accommodation than keeping letter size constantly at 3.5 mm (ANOVA: p<0.0001, post hoc t-test: n.s.). Positive trial lenses reduced the lag of accommodation under monocular viewing conditions and even reversed it with binocular vision. After consideration of possible sources of measurement error, the lag of accommodation measured with photorefraction at 3 D (0.41 D SE monocular and 0.35 D SE binocular) was in the range of published values from the Canon R-1 autorefractor. With the measured lag, simulations of the retinal images for a diffraction limited eye suggest surprisingly poor letter contrast on the retina.
Hines, Michael L; Davison, Andrew P; Muller, Eilif
2009-01-01
The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because all existing models written in Hoc, including graphical user interface tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the xml module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications.
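A minimal sketch of the kind of NEURON-from-Python usage the abstract describes, assuming the neuron Python package is installed. The toy cell and stimulus parameters are arbitrary illustration values.

```python
# Drive NEURON from Python: build a toy cell, stimulate it, and record results.
from neuron import h
h.load_file("stdrun.hoc")          # load NEURON's standard run library (Hoc code)

soma = h.Section(name="soma")      # create a section directly from Python
soma.L = soma.diam = 20            # geometry in micrometres
soma.insert("hh")                  # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))         # current clamp at the middle of the soma
stim.delay, stim.dur, stim.amp = 1, 5, 0.2

v, t = h.Vector(), h.Vector()      # record membrane potential and time
v.record(soma(0.5)._ref_v)
t.record(h._ref_t)

h.finitialize(-65)                 # initialize and run for 20 ms
h.continuerun(20)

# The recorded Vectors behave like sequences, so Python tools apply directly.
print("peak membrane potential:", round(max(v), 1), "mV")
```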
Hines, Michael L.; Davison, Andrew P.; Muller, Eilif
2008-01-01
The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because all existing models written in Hoc, including graphical user interface tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the xml module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications. PMID:19198661
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
case of cognitive radio applications. Modulation classification is part of a broader problem known as blind or uncooperative demodulation, the goal of...
Social Information Processing Analysis (SIPA): Coding Ongoing Human Communication.
ERIC Educational Resources Information Center
Fisher, B. Aubrey; And Others
1979-01-01
The purpose of this paper is to present a new analytical system to be used in communication research. Unlike many existing systems devised ad hoc, this research tool, a system for interaction analysis, is embedded in a conceptual rationale based on modern systems theory. (Author)
Performance Theories for Sentence Coding: Some Quantitative Models
ERIC Educational Resources Information Center
Aaronson, Doris; And Others
1977-01-01
This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)
Bijective transformation circular codes and nucleotide exchanging RNA transcription.
Michel, Christian J; Seligmann, Hervé
2014-04-01
The C(3) self-complementary circular code X identified in genes of prokaryotes and eukaryotes is a set of 20 trinucleotides enabling reading frame retrieval and maintenance, i.e. a framing code (Arquès and Michel, 1996; Michel, 2012, 2013). Some mitochondrial RNAs correspond to DNA sequences when RNA transcription systematically exchanges between nucleotides (Seligmann, 2013a,b). We study here the 23 bijective transformation codes ΠX of X which may code nucleotide exchanging RNA transcription as suggested by this mitochondrial observation. The 23 bijective transformation codes ΠX are C(3) trinucleotide circular codes, seven of them are also self-complementary. Furthermore, several correlations are observed between the Reading Frame Retrieval (RFR) probability of bijective transformation codes ΠX and the different biological properties of ΠX related to their numbers of RNAs in GenBank's EST database, their polymerization rate, their number of amino acids and the chirality of amino acids they code. Results suggest that the circular code X with the functions of reading frame retrieval and maintenance in regular RNA transcription, may also have, through its bijective transformation codes ΠX, the same functions in nucleotide exchanging RNA transcription. Associations with properties such as amino acid chirality suggest that the RFR of X and its bijective transformations molded the origins of the genetic code's machinery. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
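The two code properties used above can be checked with a few lines of Python: self-complementarity means every trinucleotide's reverse complement is also in the code, and a bijective transformation simply applies a nucleotide permutation to each position. The toy code and the example permutation below are illustrations, not the specific transformations analyzed in the paper.

```python
# Check self-complementarity and apply a nucleotide bijection to a trinucleotide code.
COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def reverse_complement(t):
    return "".join(COMPLEMENT[b] for b in reversed(t))

def is_self_complementary(code):
    # Every trinucleotide's reverse complement must also belong to the code.
    return all(reverse_complement(t) in code for t in code)

def transform(code, pi):
    """Apply a nucleotide bijection pi (a dict) to every trinucleotide of the code."""
    return {"".join(pi[b] for b in t) for t in code}

x = {"AAC", "GTT", "ATC", "GAT"}          # toy code: AAC/GTT and ATC/GAT are complementary pairs
print(is_self_complementary(x))           # True
swap_ag_ct = {"A": "G", "G": "A", "C": "T", "T": "C"}   # one arbitrary bijection
print(transform(x, swap_ag_ct))
```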
Perspective: Semantic Data Management for the Home
2008-05-01
The more flexible policies found in many management tasks must be made in an ad hoc fashion at the application level, leading to a loss of user...this mismatch as a significant source of disorganization: Aaron: "I'm very conscious about the way I name things; I have a coding system. But the...thing is, that doesn't work if you have everything spread out. The coding system makes sense when there's a lot of other things around, but not when it's
2015-06-01
events was ad hoc and problematic due to time constraints and changing requirements. Determining errors in context and heuristics required expertise... Data reduction for analysis of Command, Control, Communications, and Computer (C4) network tests
Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo
2015-04-01
This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.
NASA Astrophysics Data System (ADS)
Wang, H. H.; Shi, Y. P.; Li, X. H.; Ni, K.; Zhou, Q.; Wang, X. H.
2018-03-01
In this paper, a scheme to measure the position of precision stages with high precision is presented. The encoder is composed of a scale grating and a compact two-probe reading head that reads the zero-position pulse signal and the continuous incremental displacement signal. The scale grating contains different codes: multiple reference codes with different spacing superimposed onto the incremental grooves, which have an equal-spacing structure. The code of the reference mask in the reading head is the same as the reference codes on the scale grating, and it generates a pulse signal that coarsely locates the reference position when the reading head moves along the scale grating. After locating the reference position within a section by means of the pulse signal, the reference position can be located precisely using the amplitude of the incremental displacement signal. A set of reference codes and a scale grating were designed, and experimental results show that the primary precision achieved by the design is 1 μm. The period of the incremental signal is 1 μm, and 1000/N nm precision can be achieved by subdividing the incremental signal N times.
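Written out, the subdivision arithmetic in the closing sentence is simply the incremental signal period T = 1 μm divided by the electronic interpolation factor N; the value N = 100 below is only an example.

```latex
% Resolution obtained by subdividing the 1 um incremental signal period N times
\[
  \Delta x \;=\; \frac{T}{N} \;=\; \frac{1000\ \mathrm{nm}}{N},
  \qquad \text{e.g. } N = 100 \;\Rightarrow\; \Delta x = 10\ \mathrm{nm}.
\]
```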
75 FR 56528 - EPA's Role in Advancing Sustainable Products
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-16
... services (NAICS code 72). Other services, except public administration (NAICS code 81). Public... Rm. 3334, EPA West Bldg., 1301 Constitution Ave., NW., Washington, DC. The EPA/DC Public Reading Room... holidays. The telephone number of the EPA/DC Public Reading Room is (202) 566-1744, and the telephone...
Wright, Imogen A; Travers, Simon A
2014-07-01
The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Active Cooperation Between Primary Users and Cognitive Radio Users in Heterogeneous Ad-Hoc Networks
2012-04-01
processing to wireless communications and networking, including space-time coding and modulation for MIMO wireless communications, MIMO-OFDM systems, and...multi-input multi-output (MIMO) system that can significantly increase the link capacity and realize a new form of spatial diversity which has been termed
Reading skills in Persian deaf children with cochlear implants and hearing aids.
Rezaei, Mohammad; Rashedi, Vahid; Morasae, Esmaeil Khedmati
2016-10-01
Reading skills are necessary for educational development in children. Many studies have shown that children with hearing loss often experience delays in reading. This study aimed to examine reading skills of Persian deaf children with cochlear implants and hearing aids and compare them with normal hearing counterparts. The sample consisted of 72 second- and third-grade Persian-speaking children aged 8-12 years. They were divided into three equal groups including 24 children with cochlear implant (CI), 24 children with hearing aid (HA), and 24 children with normal hearing (NH). Reading performance of participants was evaluated by the "Nama" reading test. "Nama" provides normative data for hearing and deaf children and consists of 10 subtests, and the sum of the scores is regarded as the reading performance score. Results of ANOVA on the reading test showed that NH children had significantly better reading performance than deaf children with CI and HA in both grades (P < 0.001). Post-hoc analysis, using the Tukey test, indicated that there was no significant difference between HA and CI groups in terms of non-word reading, word reading, and word comprehension skills (respectively, P = 0.976, P = 0.988, P = 0.998). Considering the findings, cochlear implantation is not significantly more effective than hearing aids for the improvement of reading abilities. It is clear that even with considerable advances in hearing aid technology, many deaf children continue to find literacy a challenging struggle. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Tsai, Jie-Li; Lee, Chia-Ying; Tzeng, Ovid J. L.; Hung, Daisy L.; Yen, Nai-Shing
2004-01-01
The role of phonological coding for character identification was examined with the benefit of processing parafoveal characters in eye fixations while reading Chinese sentences. In Experiment 1, the orthogonal manipulation of phonological and orthographic similarity can separate two types of phonological benefits for homophonic previews, according…
Computer Simulation of Reading.
ERIC Educational Resources Information Center
Leton, Donald A.
In recent years, coding and decoding have been claimed to be the processes for converting one language form to another. But there has been little effort to locate these processes in the human learner or to identify the nature of the internal codes. Computer simulation of reading is useful because the similarities in the human reception and…
Assessment of Online Patient Education Materials from Major Dermatologic Associations
John, Ann M.; John, Elizabeth S.; Hansberry, David R.
2016-01-01
Objective: Patients increasingly use the internet to find medical information regarding their conditions and treatments. Physicians often supplement visits with written education materials. Online patient education materials from major dermatologic associations should be written at appropriate reading levels to optimize utility for patients. The purpose of this study is to assess online patient education materials from major dermatologic associations and determine if they are written at the fourth to sixth grade level recommended by the American Medical Association and National Institutes of Health. Design: This is a descriptive and correlational design. Setting: Academic institution. Participants/measurements: Patient education materials from eight major dermatology websites were downloaded and assessed using 10 readability scales. A one-way analysis of variance and Tukey's Honestly Significant Difference (HSD) post hoc analysis were performed to determine the difference in readability levels between websites. Results: Two hundred and sixty patient education materials were assessed. Collectively, patient education materials were written at a mean grade level of 11.13, with 65.8 percent of articles written above a tenth grade level and no articles written at the American Medical Association/National Institutes of Health recommended grade levels. Analysis of variance demonstrated a significant difference between websites for each reading scale (p<0.001), which was confirmed with Tukey's HSD post hoc analysis. Conclusion: Online patient education materials from major dermatologic association websites are written well above recommended reading levels. Associations should consider revising patient education materials to allow more effective patient comprehension. (J ClinAesthet Dermatol. 2016;9(9):23–28.) PMID:27878059
Assessment of Online Patient Education Materials from Major Dermatologic Associations.
John, Ann M; John, Elizabeth S; Hansberry, David R; Lambert, William Clark
2016-09-01
Objective: Patients increasingly use the internet to find medical information regarding their conditions and treatments. Physicians often supplement visits with written education materials. Online patient education materials from major dermatologic associations should be written at appropriate reading levels to optimize utility for patients. The purpose of this study is to assess online patient education materials from major dermatologic associations and determine if they are written at the fourth to sixth grade level recommended by the American Medical Association and National Institutes of Health. Design: This is a descriptive and correlational design. Setting: Academic institution. Participants/measurements: Patient education materials from eight major dermatology websites were downloaded and assessed using 10 readability scales. A one-way analysis of variance and Tukey's Honestly Significant Difference (HSD) post hoc analysis were performed to determine the difference in readability levels between websites. Results: Two hundred and sixty patient education materials were assessed. Collectively, patient education materials were written at a mean grade level of 11.13, with 65.8 percent of articles written above a tenth grade level and no articles written at the American Medical Association/National Institutes of Health recommended grade levels. Analysis of variance demonstrated a significant difference between websites for each reading scale (p<0.001), which was confirmed with Tukey's HSD post hoc analysis. Conclusion: Online patient education materials from major dermatologic association websites are written well above recommended reading levels. Associations should consider revising patient education materials to allow more effective patient comprehension. (J ClinAesthet Dermatol. 2016;9(9):23-28.).
Whitford, Veronica; Titone, Debra
2016-02-01
This study addressed a central yet previously unexplored issue in the psychological science of aging, namely, whether the advantages of healthy aging (e.g., greater lifelong experience with language) or disadvantages (e.g., decreases in cognitive and sensory processing) drive L1 and L2 reading performance in bilingual older adults. To this end, we used a gaze-contingent moving window paradigm to examine both global aspects of reading fluency (e.g., reading rates, number of regressions) and the perceptual span (i.e., allocation of visual attention into the parafovea) in bilingual older adults during L1 and L2 sentence reading, as a function of individual differences in current L2 experience. Across the L1 and L2, older adults exhibited reduced reading fluency (e.g., slower reading rates, more regressions), but a similar perceptual span compared with matched younger adults. Also similar to matched younger adults, older adults' reading fluency was lower for L2 reading than for L1 reading as a function of current L2 experience. Specifically, greater current L2 experience increased L2 reading fluency, but decreased L1 reading fluency (for global reading measures only). Taken together, the dissociation between intact perceptual span and impaired global reading measures suggests that older adults may prioritize parafoveal processing despite age-related encoding difficulties. Consistent with this interpretation, post hoc analyses revealed that older adults with higher versus lower executive control were more likely to adopt this strategy. (c) 2016 APA, all rights reserved).
Semantic and phonological coding in poor and normal readers.
Vellutino, F R; Scanlon, D M; Spearing, D
1995-02-01
Three studies were conducted evaluating semantic and phonological coding deficits as alternative explanations of reading disability. In the first study, poor and normal readers in second and sixth grade were compared on various tests evaluating semantic development as well as on tests evaluating rapid naming and pseudoword decoding as independent measures of phonological coding ability. In a second study, the same subjects were given verbal memory and visual-verbal learning tasks using high and low meaning words as verbal stimuli and Chinese ideographs as visual stimuli. On the semantic tasks, poor readers performed below the level of the normal readers only at the sixth grade level, but, on the rapid naming and pseudoword learning tasks, they performed below the normal readers at the second as well as at the sixth grade level. On both the verbal memory and visual-verbal learning tasks, performance in poor readers approximated that of normal readers when the word stimuli were high in meaning but not when they were low in meaning. These patterns were essentially replicated in a third study that used some of the same semantic and phonological measures used in the first experiment, and verbal memory and visual-verbal learning tasks that employed word lists and visual stimuli (novel alphabetic characters) that more closely approximated those used in learning to read. It was concluded that semantic coding deficits are an unlikely cause of reading difficulties in most poor readers at the beginning stages of reading skills acquisition, but accrue as a consequence of prolonged reading difficulties in older readers. It was also concluded that phonological coding deficits are a probable cause of reading difficulties in most poor readers.
NASA Astrophysics Data System (ADS)
Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok
Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any risk of accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner, and thus a reliable and timely data dissemination service is the key building block of VANET. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles travelling in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.
Predictors of Foreign Language Reading Comprehension in a Hypermedia Reading Environment
ERIC Educational Resources Information Center
Akbulut, Yavuz
2008-01-01
This study investigated factors affecting second/foreign language (L2) reading comprehension in a hypermedia environment within the theoretical framework of dual coding and cognitive load theories, and interactive models of L2 reading. The independent variables were reading ability, topic interest, prior topical knowledge, and the number of times…
Error probability for RFID SAW tags with pulse position coding and peak-pulse detection.
Shmaliy, Yuriy S; Plessky, Victor; Cerda-Villafaña, Gustavo; Ibarra-Manzano, Oscar
2012-11-01
This paper addresses the code reading error probability (EP) in radio-frequency identification (RFID) SAW tags with pulse position coding (PPC) and peak-pulse detection. EP is found in its most general form, assuming M groups of codes with N slots each and allowing individual SNRs in each slot. The basic case of zero signal in all off-pulses and equal signals in all on-pulses is investigated in detail. We show that if a SAW-tag with PPC is designed such that the spurious responses are attenuated by more than 20 dB below the on-pulses, then EP can be achieved at the level of 10^-8 (one false read per 10^8 readings) with SNR >17 dB for any reasonable M and N. The tag reader range is estimated as a function of the transmitted power and EP.
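The basic case described above (zero signal in off-pulses, equal SNR in on-pulses) can be explored numerically with a small Monte Carlo sketch of peak-pulse detection under Gaussian noise. This is only an illustration of the setting, not the closed-form EP derived in the paper, and the M, N, SNR and trial-count values are assumptions.

```python
# Monte Carlo estimate of the code-reading error probability for peak-pulse
# detection with M groups of N slots each; one slot per group carries the pulse.
import math
import random

def read_error_prob(M=4, N=8, snr_db=10.0, trials=20000):
    amp = math.sqrt(10 ** (snr_db / 10.0))      # on-pulse amplitude, unit-variance noise
    errors = 0
    for _ in range(trials):
        ok = True
        for _ in range(M):                      # each group encodes one of N pulse positions
            on_slot = random.randrange(N)
            samples = [random.gauss(0.0, 1.0) for _ in range(N)]
            samples[on_slot] += amp
            if max(range(N), key=lambda s: samples[s]) != on_slot:
                ok = False                      # one wrongly decoded group corrupts the code
                break
        errors += not ok
    return errors / trials

for snr in (6, 10, 14):
    print(f"SNR {snr} dB -> estimated EP {read_error_prob(snr_db=snr):.4f}")
```

A simulation of this size cannot resolve error rates near 10^-8; it is only meant to show how EP falls as the on-pulse SNR grows.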
Phonological coding during reading.
Leinenger, Mallorie
2014-11-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Phonological coding during reading
Leinenger, Mallorie
2014-01-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679
Data Delivery Method Based on Neighbor Nodes' Information in a Mobile Ad Hoc Network
Hayashi, Takuma; Taenaka, Yuzo; Okuda, Takeshi; Yamaguchi, Suguru
2014-01-01
This paper proposes a data delivery method based on neighbor nodes' information to achieve reliable communication in a mobile ad hoc network (MANET). In a MANET, it is difficult to deliver data reliably due to instabilities in network topology and wireless network condition which result from node movement. To overcome such unstable communication, opportunistic routing and network coding schemes have lately attracted considerable attention. Although an existing method that employs such schemes, MAC-independent opportunistic routing and encoding (MORE), Chachulski et al. (2007), improves the efficiency of data delivery in an unstable wireless mesh network, it does not address node movement. To efficiently deliver data in a MANET, the method proposed in this paper thus first employs the same opportunistic routing and network coding used in MORE and also uses the location information and transmission probabilities of neighbor nodes to adapt to changeable network topology and wireless network condition. The simulation experiments showed that the proposed method can achieve efficient data delivery with low network load when the movement speed is relatively slow. PMID:24672371
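MORE-style intra-flow coding, which the abstract builds on, rests on random linear combinations of the packets in a generation. The sketch below shows that core idea over GF(2) (encoding and Gaussian-elimination decoding only); it does not model the neighbor-location and transmission-probability machinery of the proposed method, and the packet values are arbitrary.

```python
# Random linear network coding over GF(2): packets are bit vectors (ints),
# coded packets are random XOR combinations, decoding is Gaussian elimination.
import random

def encode(packets):
    """Return (coefficient_vector, coded_payload) as a random GF(2) combination."""
    k = len(packets)
    coeffs = [random.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[random.randrange(k)] = 1            # avoid the useless all-zero combination
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p                           # XOR is addition over GF(2)
    return coeffs, payload

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the original packets or None."""
    basis = []                                      # reduced rows with distinct pivots
    for coeffs, payload in coded:
        coeffs = list(coeffs)
        for bc, bp in basis:
            pivot = bc.index(1)
            if coeffs[pivot]:
                coeffs = [a ^ b for a, b in zip(coeffs, bc)]
                payload ^= bp
        if any(coeffs):
            basis.append((coeffs, payload))
    if len(basis) < k:
        return None                                 # not yet enough innovative packets
    out = [None] * k
    for bc, bp in sorted(basis, key=lambda row: row[0].index(1), reverse=True):
        pivot = bc.index(1)
        for j in range(pivot + 1, k):
            if bc[j]:
                bp ^= out[j]                        # back-substitution
        out[pivot] = bp
    return out

original = [0b1010, 0b0111, 0b1100]                 # a generation of three toy packets
received, decoded = [], None
while decoded is None:                              # collect coded packets until decodable
    received.append(encode(original))
    decoded = decode(received, k=len(original))
print("recovered:", decoded == original, "after", len(received), "coded packets")
```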
Data delivery method based on neighbor nodes' information in a mobile ad hoc network.
Kashihara, Shigeru; Hayashi, Takuma; Taenaka, Yuzo; Okuda, Takeshi; Yamaguchi, Suguru
2014-01-01
This paper proposes a data delivery method based on neighbor nodes' information to achieve reliable communication in a mobile ad hoc network (MANET). In a MANET, it is difficult to deliver data reliably due to instabilities in network topology and wireless network condition which result from node movement. To overcome such unstable communication, opportunistic routing and network coding schemes have lately attracted considerable attention. Although an existing method that employs such schemes, MAC-independent opportunistic routing and encoding (MORE), Chachulski et al. (2007), improves the efficiency of data delivery in an unstable wireless mesh network, it does not address node movement. To efficiently deliver data in a MANET, the method proposed in this paper thus first employs the same opportunistic routing and network coding used in MORE and also uses the location information and transmission probabilities of neighbor nodes to adapt to changeable network topology and wireless network condition. The simulation experiments showed that the proposed method can achieve efficient data delivery with low network load when the movement speed is relatively slow.
Studies of Braille Reading Rates and Implications for the Unified English Braille Code
ERIC Educational Resources Information Center
Wetzel, Robin; Knowlton, Marie
2006-01-01
Reading rate data was collected from both print and braille readers in the areas of mathematics and literary braille. Literary braille data was collected for contracted and uncontracted braille text with dropped whole-word contractions and part-word contractions as they would appear in the Unified English Braille Code. No significant differences…
Muller, Sara; Hider, Samantha L; Raza, Karim; Stack, Rebecca J; Hayward, Richard A; Mallen, Christian D
2015-01-01
Objective: Rheumatoid arthritis (RA) is a multisystem, inflammatory disorder associated with increased levels of morbidity and mortality. While much research into the condition is conducted in the secondary care setting, routinely collected primary care databases provide an important source of research data. This study aimed to update an algorithm to define RA that was previously developed and validated in the General Practice Research Database (GPRD). Methods: The original algorithm consisted of two criteria. Individuals meeting at least one were considered to have RA. Criterion 1: ≥1 RA Read code and a disease modifying antirheumatic drug (DMARD) without an alternative indication. Criterion 2: ≥2 RA Read codes, with at least one 'strong' code and no alternative diagnoses. Lists of codes for consultations and prescriptions were obtained from the authors of the original algorithm where these were available, or compiled based on the original description and clinical knowledge. 4161 people with a first Read code for RA between 1 January 2010 and 31 December 2012 were selected from the Clinical Practice Research Datalink (CPRD, successor to the GPRD), and the criteria applied. Results: Code lists were updated for the introduction of new Read codes and biological DMARDs. 3577/4161 (86%) of people met the updated algorithm for RA, compared to 61% in the original development study. 62.8% of people fulfilled both Criterion 1 and Criterion 2. Conclusions: Those wishing to define RA in the CPRD should consider using this updated algorithm, rather than a single RA code, if they wish to identify only those who are most likely to have RA. PMID:26700281
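The two criteria translate naturally into a patient-level query. The sketch below applies them to tabular consultation and prescription records; the table layout, column names and code lists are placeholders for illustration, not the published CPRD code lists.

```python
# Illustrative application of the two-criterion RA definition to patient records.
import pandas as pd

RA_CODES = {"N040.", "N042."}                     # hypothetical RA Read codes
STRONG_RA_CODES = {"N040."}                       # hypothetical 'strong' subset
ALT_DMARD_INDICATIONS = {"psoriasis", "IBD"}      # conditions that explain a DMARD
ALT_DIAGNOSES = {"SLE", "psoriatic arthritis"}    # alternative inflammatory diagnoses

def meets_algorithm(consults: pd.DataFrame, scripts: pd.DataFrame) -> pd.Series:
    """Boolean Series indexed by patient id: True if criterion 1 or 2 is met."""
    patients = pd.Index(consults["patid"].unique())

    ra_counts = (consults[consults["readcode"].isin(RA_CODES)]
                 .groupby("patid")["readcode"].count()
                 .reindex(patients, fill_value=0))
    strong = (consults[consults["readcode"].isin(STRONG_RA_CODES)]
              .groupby("patid").size() > 0).reindex(patients, fill_value=False)
    alt_dx = (consults[consults["diagnosis"].isin(ALT_DIAGNOSES)]
              .groupby("patid").size() > 0).reindex(patients, fill_value=False)
    dmard = (scripts[scripts["is_dmard"]]
             .groupby("patid").size() > 0).reindex(patients, fill_value=False)
    alt_ind = (consults[consults["diagnosis"].isin(ALT_DMARD_INDICATIONS)]
               .groupby("patid").size() > 0).reindex(patients, fill_value=False)

    criterion1 = (ra_counts >= 1) & dmard & ~alt_ind   # >=1 RA code + unexplained DMARD
    criterion2 = (ra_counts >= 2) & strong & ~alt_dx   # >=2 RA codes incl. a strong one
    return criterion1 | criterion2
```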
Levels of Syntactic Realization in Oral Reading.
ERIC Educational Resources Information Center
Brown, Eric
Two contrasting theories of reading are reviewed in light of recent research in psycholinguistics. A strictly "visual" model of fluent reading is contrasted with several mediational theories where auditory or articulatory coding is deemed necessary for comprehension. Surveying the research in visual information processing, oral reading,…
ERIC Educational Resources Information Center
Rønberg, Louise Flensted; Petersen, Dorthe Klint
2016-01-01
This study explores the incidence of poor comprehenders, that is, children identified as having reading comprehension difficulties, despite age-appropriate word reading skills. It supports the findings that some children do show poor reading comprehension, despite age-appropriate word reading, as measured with a phonological coding test. However,…
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
Elbro, C; Nielsen, I; Petersen, D K
1994-01-01
Difficulties in reading and language skills which persist from childhood into adult life are the concerns of this article. The aims were twofold: (1) to find measures of adult reading processes that validate adults' retrospective reports of difficulties in learning to read during the school years, and (2) to search for indications of basic deficits in phonological processing that may point toward underlying causes of reading difficulties. Adults who reported a history of difficulties in learning to read (n=102) were distinctly disabled in phonological coding in reading, compared to adults without similar histories (n=56). They were less disabled in the comprehension of written passages, and the comprehension disability was explained by the phonological difficulties. A number of indications were found that adults with poor phonological coding skills in reading (i.e., dyslexia) have basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness, educational level, and daily reading habits are taken into account. It is suggested that dyslexics possess less distinct phonological representations of spoken words.
Understanding and Evaluating English Learners' Oral Reading with Miscue Analysis
ERIC Educational Resources Information Center
Latham Keh, Melissa
2017-01-01
Miscue analysis provides a unique opportunity to explore English learners' (ELs') oral reading from an asset-based perspective. This article focuses on insights about eight adolescent ELs' oral reading patterns that were gained through miscue analysis. The participants' miscues were coded with the Reading Miscue Inventory, and participants were…
Self-complementary circular codes in coding theory.
Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz
2018-04-01
Self-complementary circular codes are involved in pairing genetic processes. A maximal C(3) self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph G(X) associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size, in particular for maximal circular codes (20 trinucleotides). For codes of small size, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
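The graph construction referenced above is easy to reproduce: each trinucleotide b1b2b3 contributes the edges b1 -> b2b3 and b1b2 -> b3, and, per the cited graph-theoretic characterization (Fimmel et al. 2016), a trinucleotide code is circular exactly when this graph is acyclic. The toy codes below are illustrations, not the code X from genes.

```python
# Build the directed graph associated with a trinucleotide code and test
# circularity via acyclicity (assuming the graph characterization cited above).
def associated_graph(code):
    edges = {}
    for t in code:
        assert len(t) == 3
        edges.setdefault(t[0], set()).add(t[1:])     # b1  -> b2b3
        edges.setdefault(t[:2], set()).add(t[2])     # b1b2 -> b3
    return edges

def is_acyclic(edges):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GREY
        for v in edges.get(u, ()):
            if color.get(v, WHITE) == GREY:
                return False                          # back edge means a cycle
            if color.get(v, WHITE) == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True
    return all(color.get(u, WHITE) != WHITE or dfs(u) for u in list(edges))

print(is_acyclic(associated_graph({"AAC", "ATC"})))   # acyclic -> circular (True)
print(is_acyclic(associated_graph({"ACA", "CAA"})))   # A -> CA -> A cycle (False)
```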
Deaf Children's Use of Phonological Coding: Evidence from Reading, Spelling, and Working Memory
ERIC Educational Resources Information Center
Harris, Margaret; Moreno, Constanza
2004-01-01
Two groups of deaf children, aged 8 and 14 years, were presented with a number of tasks designed to assess their reliance on phonological coding. Their performance was compared with that of hearing children of the same chronological age (CA) and reading age (RA). Performance on the first task, short-term recall of pictures, showed that the deaf…
Reading through a Noisy Channel: Why There's Nothing Special about the Perception of Orthography
ERIC Educational Resources Information Center
Norris, Dennis; Kinoshita, Sachiko
2012-01-01
The goal of research on how letter identity and order are perceived during reading is often characterized as one of "cracking the orthographic code." Here, we suggest that there is no orthographic code to crack: Words are perceived and represented as sequences of letters, just as in a dictionary. Indeed, words are perceived and represented in…
Pirates at Parties: Letter Position Processing in Developing Readers
ERIC Educational Resources Information Center
Kohnen, Saskia; Castles, Anne
2013-01-01
There has been much recent interest in letter position coding in adults, but little is known about the development of this process in children learning to read. Here, the letter position coding abilities of 127 children in Grades 2, 3, and 4 (aged 7-10 years) were examined by comparing their performance in reading aloud "migratable" words (e.g.,…
ERIC Educational Resources Information Center
Mayberry, Rachel I.; del Giudice, Alex A.; Lieberman, Amy M.
2011-01-01
The relation between reading ability and phonological coding and awareness (PCA) skills in individuals who are severely and profoundly deaf was investigated with a meta-analysis. From an initial set of 230 relevant publications, 57 studies were analyzed that experimentally tested PCA skills in 2,078 deaf participants. Half of the studies found…
ERIC Educational Resources Information Center
Mascareño, Mayra; Deunk, Marjolein I.; Snow, Catherine E.; Bosker, Roel J.
2017-01-01
The aim of the study was to explore teacher-child interaction in 24 whole-class read-aloud sessions in Chilean kindergarten classrooms serving children from low socioeconomic backgrounds. Fifteen sessions focused on story meaning, and nine focused on language coding/decoding. We coded teacher and child turns for their function (i.e., teacher…
ERIC Educational Resources Information Center
Meyer, Linda A.; And Others
This manual describes the model--specifically the observation procedures and coding systems--used in a longitudinal study of how children learn to comprehend what they read, with particular emphasis on science texts. Included are procedures for the following: identifying students; observing--recording observations and diagraming the room; writing…
Method and apparatus for reading lased bar codes on shiny-finished fuel rod cladding tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenfield, M.P.; Lambert, D.V.
1990-10-02
This patent describes, in a nuclear fuel rod identification system, a method of reading a bar code etched directly on a surface of a nuclear fuel rod. It comprises: defining a pair of light diffuser surfaces adjacent one another but in oppositely inclined relation to a beam of light emitted from a light reader; positioning a fuel rod, having a cylindrical surface portion with a bar code etched directly thereon, relative to the light diffuser surfaces such that the surfaces are disposed adjacent to and in oppositely inclined relation along opposite sides of the fuel rod surface portion and the fuel rod surface portion is aligned with the beam of light emitted from the light reader; directing the beam of light on the bar code on the fuel rod cylindrical surface portion such that the light is reflected therefrom onto one of the light diffuser surfaces; and receiving and reading the reflected light from the bar code via the one of the light diffuser surfaces to the light reader.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... related to the 7th Session of the AFTF will be accessible via the World Wide Web at the following address... World Health Organization (WHO). Through adoption of food standards, codes of practice, and other... animals. The guidelines should include specific science-based risk assessment criteria to apply to feed...
2004-01-08
Research at NASA's Marshall Space Flight Center has resulted in a system for reading hidden identification codes using a hand-held magnetic scanner. It's an invention that could help businesses improve inventory management, enhance safety, improve security, and aid in recall efforts if defects are discovered. Two-dimensional Data Matrix symbols consisting of letters and numbers permanently etched on items for identification and resembling a small checkerboard pattern are more efficient and reliable than traditional bar codes, and can store up to 100 times more information. A team led by Fred Schramm of the Marshall Center's Technology Transfer Department, in partnership with PRI, Torrance, California, has developed a hand-held device that can read this special type of coded symbols, even if covered by up to six layers of paint. Before this new technology was available, matrix symbols were read with optical scanners, and only if the codes were visible. This latest improvement in digital Data Matrix technologies offers greater flexibility for businesses and industries already using the marking system. Paint, inks, and pastes containing magnetic properties are applied in matrix symbol patterns to objects with two-dimensional codes, and the codes are read by a magnetic scanner, even after being covered with paint or other coatings. The ability to read hidden matrix symbols promises a wide range of benefits in a number of fields, including airlines, electronics, healthcare, and the automotive industry. Many industries would like to hide information on a part, so it can be read only by the party who put it there. For instance, the automotive industry uses direct parts marking for inventory control, but for aesthetic purposes the marks often need to be invisible. Symbols have been applied to a variety of materials, including metal, plastic, glass, paper, fabric and foam, on everything from electronic parts to pharmaceuticals to livestock. The portability of the hand-held scanner makes work faster and easier. It reads marks in darkness, under bright light that might interfere with optical reading of visible marks, and can detect symbols obscured by discoloration or contamination. Through a license with NASA, another partner, Robotic Vision Systems, Inc., of Nashua, New Hampshire, will sell the scanner on the commercial market. NASA continues to seek additional companies to license the product. Joint efforts by Marshall researchers and industry partners are aimed at improving identification technology as part of NASA's program to better life on Earth through technology designed for the space program. In this photo, Don Roxby, Robotic Vision Systems, Inc., (left) demonstrates the magnetic handheld scanner for Fred Schramm (right), MSFC Technology Transfer Department.
A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model
ERIC Educational Resources Information Center
Sadoski, Mark; McTigue, Erin M.; Paivio, Allan
2012-01-01
In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…
ERIC Educational Resources Information Center
Davies, Robert; Rodriguez-Ferreiro, Javier; Suarez, Paz; Cuetos, Fernando
2013-01-01
In an opaque orthography like English, phonological coding errors are a prominent feature of dyslexia. In a transparent orthography like Spanish, reading difficulties are characterized by slower reading speed rather than reduced accuracy. In previous research, the reading speed deficit was revealed by asking children to read lists of words.…
The design of the CMOS wireless bar code scanner applying optical system based on ZigBee
NASA Astrophysics Data System (ADS)
Chen, Yuelin; Peng, Jian
2008-03-01
Traditional bar code scanners are constrained by the length of their data cable, while the range of the wireless bar code scanners currently on the market is generally between 30 m and 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee is designed to meet market demands. The scanning system consists of a CMOS image sensor and the embedded S3C2401X chip. When a two-dimensional bar code is read, inaccurate or incorrect readings can result from image smudging, interference, poor imaging conditions, signal noise, or unstable supply voltage; matrix evaluation and Reed-Solomon error correction are therefore applied to address these errors. To build the complete wireless optical bar code system and ensure that bar code image signals can be transmitted digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the circuit diagram of the CC2430 wireless transmitting/receiving module is established. By porting the embedded Linux operating system to the MCU, a practical multi-tasking wireless CMOS optical bar code scanner is constructed. Finally, communication performance is tested with the SmartRF evaluation software. In open space, each ZigBee node achieves reliable transmission over 50 m, and adding further ZigBee nodes extends the range to several kilometers.
[Challenges to implementation of the ECG reading center in ELSA-Brasil].
Ribeiro, Antonio Luiz; Pereira, Samuel Vianney da Cunha; Bergmann, Kaiser; Ladeira, Roberto Marini; Oliveira, Rackel Aguiar Mendes; Lotufo, Paulo A; Mill, José Geraldo; Barreto, Sandhi Maria
2013-06-01
Electrocardiography is an established low-cost method of cardiovascular assessment, utilized for decades in large epidemiological studies. Nonetheless, its use in large epidemiological studies presents challenges, especially when seeking to develop a reading center. This article describes the process, difficulties and challenges of implementing an electrocardiogram reading center in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). Among the issues discussed, we emphasize: the criteria for selecting the electrocardiography machines and the central system for their storage and management; the required personnel; the procedures for acquiring electrocardiograms and transmitting them to the Reading Center; coding systems, with emphasis on the Minnesota code; ethical and practical issues regarding the delivery of reports to study participants; and aspects related to quality control.
Taxonomic and ad hoc categorization within the two cerebral hemispheres.
Shen, Yeshayahu; Aharoni, Bat-El; Mashal, Nira
2015-01-01
A typicality effect refers to categorization which is performed more quickly or more accurately for typical than for atypical members of a given category. Previous studies reported a typicality effect for category members presented in the left visual field/right hemisphere (RH), suggesting that the RH applies a similarity-based categorization strategy. However, findings regarding the typicality effect within the left hemisphere (LH) are less conclusive. The current study tested the pattern of typicality effects within each hemisphere for both taxonomic and ad hoc categories, using words presented to the left or right visual fields. Experiment 1 tested typical and atypical members of taxonomic categories as well as non-members, and Experiment 2 tested typical and atypical members of ad hoc categories as well as non-members. The results revealed a typicality effect in both hemispheres and in both types of categories. Furthermore, the RH categorized atypical stimuli more accurately than did the LH. Our findings suggest that both hemispheres rely on a similarity-based categorization strategy, but the coarse semantic coding of the RH seems to facilitate the categorization of atypical members.
Working Memory Influences Processing Speed and Reading Fluency in ADHD
Jacobson, Lisa A.; Ryan, Matthew; Martin, Rebecca B.; Ewen, Joshua; Mostofsky, Stewart H.; Denckla, Martha B.; Mahone, E. Mark
2012-01-01
Processing speed deficits affect reading efficiency, even among individuals who recognize and decode words accurately. Children with ADHD who decode words accurately can still have inefficient reading fluency, leading to a bottleneck in other cognitive processes. This “slowing” in ADHD is associated with deficits in fundamental components of executive function underlying processing speed, including response selection. The purpose of the present study was to deconstruct processing speed in order to determine which components of executive control best explain the “processing” speed deficits related to reading fluency in ADHD. Participants (41 ADHD, 21 controls), ages 9-14, screened for language disorders, word reading deficits, and psychiatric disorders, were administered measures of copying speed, processing speed, reading fluency, working memory, reaction time, inhibition, and auditory attention span. Compared to controls, children with ADHD showed reduced oral and silent reading fluency, and reduced processing speed—driven primarily by deficits on WISC-IV Coding. In contrast, groups did not differ on copying speed. After controlling for copying speed, sex, severity of ADHD-related symptomatology, and GAI, slowed “processing” speed (i.e., Coding) was significantly associated with verbal span and measures of working memory, but not with measures of response control/inhibition, lexical retrieval speed, reaction time, or intra-subject variability. Further, “processing” speed (i.e., Coding, residualized for copying speed) and working memory were significant predictors of oral reading fluency. Abnormalities in working memory and response selection (which are frontally-mediated and enter into the output side of processing speed) may play an important role in deficits in reading fluency in ADHD, potentially more than posteriorally-mediated problems with orienting of attention or perceiving the stimulus. PMID:21287422
Working memory influences processing speed and reading fluency in ADHD.
Jacobson, Lisa A; Ryan, Matthew; Martin, Rebecca B; Ewen, Joshua; Mostofsky, Stewart H; Denckla, Martha B; Mahone, E Mark
2011-01-01
Processing-speed deficits affect reading efficiency, even among individuals who recognize and decode words accurately. Children with ADHD who decode words accurately can still have inefficient reading fluency, leading to a bottleneck in other cognitive processes. This "slowing" in ADHD is associated with deficits in fundamental components of executive function underlying processing speed, including response selection. The purpose of the present study was to deconstruct processing speed in order to determine which components of executive control best explain the "processing" speed deficits related to reading fluency in ADHD. Participants (41 ADHD, 21 controls), ages 9-14 years, screened for language disorders, word reading deficits, and psychiatric disorders, were administered measures of copying speed, processing speed, reading fluency, working memory, reaction time, inhibition, and auditory attention span. Compared to controls, children with ADHD showed reduced oral and silent reading fluency and reduced processing speed-driven primarily by deficits on WISC-IV Coding. In contrast, groups did not differ on copying speed. After controlling for copying speed, sex, severity of ADHD-related symptomatology, and GAI, slowed "processing" speed (i.e., Coding) was significantly associated with verbal span and measures of working memory but not with measures of response control/inhibition, lexical retrieval speed, reaction time, or intrasubject variability. Further, "processing" speed (i.e., Coding, residualized for copying speed) and working memory were significant predictors of oral reading fluency. Abnormalities in working memory and response selection (which are frontally mediated and enter into the output side of processing speed) may play an important role in deficits in reading fluency in ADHD, potentially more than posteriorally mediated problems with orienting of attention or perceiving the stimulus.
Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language
NASA Astrophysics Data System (ADS)
Madhavapeddy, Anil
Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.
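To make the "dynamic enforcement" half of this idea concrete, here is a small illustrative sketch in Python rather than OCaml/SPL: a transition table is consulted at run time, and firing an event out of order raises an error. The states and events are invented for the example and are not taken from the SPL automata described in the paper.

```python
# Toy run-time enforcement of a protocol automaton (illustrative only).
class ProtocolAutomaton:
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions          # {(state, event): next_state}

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.transitions[key]

# A toy SSH-like handshake: version exchange must precede key exchange,
# which must precede authentication.
ssh = ProtocolAutomaton("init", {
    ("init", "version_exchange"): "versioned",
    ("versioned", "key_exchange"): "keyed",
    ("keyed", "auth"): "authenticated",
})
ssh.fire("version_exchange")
ssh.fire("key_exchange")
ssh.fire("auth")            # ok; firing "auth" first would raise RuntimeError
```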
ERIC Educational Resources Information Center
Milchus, Norman J.
The Wayne County Pre-Reading Program for Preventing Reading Failure is an individually, diagnostically prescribed, perceptual-cognitive-linguistic development program. The program utilizes the largest compilation of prescriptively coded, reading readiness materials to be assigned prior to and concurrent with first-year reading instruction. The…
Reading Disabilities and PASS Reading Enhancement Programme
ERIC Educational Resources Information Center
Mahapatra, Shamita
2016-01-01
Children experience difficulties in reading either because they fail to decode the words and thus are unable to comprehend the text or simply fail to comprehend the text even if they are able to decode the words and read them out. Failure in word decoding results from a failure in phonological coding of written information, whereas reading…
An Information-Processing Approach to the Development of Reading for Comprehension. Final Report.
ERIC Educational Resources Information Center
Schadler, Margaret; Juola, James F.
This paper is a summary of research on the perceptual and memory processes related to reading, their developmental progress in children, and the reading abilities in adults. Reported among the results of the various studies are the following: (1) developmental changes in reading after second grade primarily improve speed of coding, (2) reading…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Katherine J; Johnson, Seth R; Prokopenko, Andrey V
'ForTrilinos' is related to The Trilinos Project, which contains a large and growing collection of solver capabilities that can utilize next-generation platforms, in particular scalable multicore, manycore, accelerator and heterogeneous systems. Trilinos is primarily written in C++, including its user interfaces. While C++ is advantageous for gaining access to the latest programming environments, it limits Trilinos usage via Fortran. Several ad hoc translation interfaces exist to enable Fortran usage of Trilinos, but none of these interfaces is general-purpose or written for reusable and sustainable external use. 'ForTrilinos' provides a seamless pathway for large and complex Fortran-based codes to access Trilinos without C/C++ interface code. This access includes Fortran versions of Kokkos abstractions for code execution and data management.
Automated collection and processing of environmental samples
Troyer, Gary L.; McNeece, Susan G.; Brayton, Darryl D.; Panesar, Amardip K.
1997-01-01
For monitoring an environmental parameter such as the level of nuclear radiation, at distributed sites, bar coded sample collectors are deployed and their codes are read using a portable data entry unit that also records the time of deployment. The time and collector identity are cross referenced in memory in the portable unit. Similarly, when later recovering the collector for testing, the code is again read and the time of collection is stored as indexed to the sample collector, or to a further bar code, for example as provided on a container for the sample. The identity of the operator can also be encoded and stored. After deploying and/or recovering the sample collectors, the data is transmitted to a base processor. The samples are tested, preferably using a test unit coupled to the base processor, and again the time is recorded. The base processor computes the level of radiation at the site during exposure of the sample collector, using the detected radiation level of the sample, the delay between recovery and testing, the duration of exposure and the half life of the isotopes collected. In one embodiment, an identity code and a site code are optically read by an image grabber coupled to the portable data entry unit.
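The decay-corrected computation described above can be sketched as follows, under a simple build-up-and-decay model that is only assumed here (it is not taken from the patent): activity accumulates in the collector at a rate proportional to the ambient field during exposure, then decays freely between recovery and testing. The calibration factor k is a hypothetical collection-efficiency constant.

```python
import math

def site_radiation_level(measured_activity, delay, exposure, half_life, k=1.0):
    """Estimate the mean radiation level at a site from a recovered collector.

    Assumes activity builds up in the collector at a rate proportional to the
    ambient field during `exposure`, then decays freely for `delay` until it is
    counted.  `k` is a hypothetical calibration factor; all times share the
    same unit as `half_life`.
    """
    lam = math.log(2) / half_life                              # decay constant
    activity_at_recovery = measured_activity * math.exp(lam * delay)
    buildup_factor = (1.0 - math.exp(-lam * exposure)) / lam   # build-up during exposure
    return activity_at_recovery / (k * buildup_factor)

# Example: sample counted 6 h after recovery, after a 24 h deployment,
# for an isotope with an 8 h half-life (all numbers are arbitrary).
print(site_radiation_level(measured_activity=120.0, delay=6.0,
                           exposure=24.0, half_life=8.0))
```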
QRAC-the-Code: a comprehension monitoring strategy for middle school social studies textbooks.
Berkeley, Sheri; Riccomini, Paul J
2013-01-01
Requirements for reading and ascertaining information from text increase as students advance through the educational system, especially in content-rich classes; hence, monitoring comprehension is especially important. However, this is a particularly challenging skill for many students who struggle with reading comprehension, including students with learning disabilities. A randomized pre-post experimental design was employed to investigate the effectiveness of a comprehension monitoring strategy (QRAC-the-Code) for improving the reading comprehension of 323 students in grades 6 and 7 in inclusive social studies classes. Findings indicated that both general education students and students with learning disabilities who were taught a simple comprehension monitoring strategy improved their comprehension of textbook content compared to students who read independently and noted important points. In addition, students in the comprehension monitoring condition reported using more reading strategies after the intervention. Implications for research and practice are discussed.
Paper-Based Textbooks with Audio Support for Print-Disabled Students.
Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko
2015-01-01
Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.
UGV Interoperability Profile (IOP) Communications Profile, Version 0
2011-12-21
Some UGV systems employ Orthogonal Frequency Division Multiplexing (OFDM) or Coded Orthogonal Frequency Division Multiplexing (COFDM) waveforms which...other portions of the IOP. [Flattened table fragment — Attribute: Waveform; Paragraph: 3.3; Title: Air Interface/Waveform; Values: OFDM, COFDM, DDL, CDL, None; OCU to Platform...Sight] [Acronym list fragment — MANET: Mobile Ad-hoc Network; Mbps: Megabits per second; MC/PM: Master Controller/Payload Manager; MHz: Megahertz; MIMO: Multiple Input Multiple Output]
Reading Performance Profile of Children with Dyslexia in Primary and Secondary School Students
ERIC Educational Resources Information Center
Balci, Emine; Çayir, Aybala
2018-01-01
The purpose of the present research was to provide information to the community about the reading subskill profiles of children with dyslexia in primary and secondary school students. 175 children (aged 7-15 yrs) were examined on a varied set of phonological coding, spelling and fluent reading tasks. For this purpose, students' fluent reading were…
ERIC Educational Resources Information Center
Son, Seung-Hee Claire; Tineo, Maria F.
2016-01-01
This study examined associations among low-income mothers' use of attention-getting utterances during shared book reading, preschoolers' verbal engagement and visual attention to reading, and their early literacy skills (N = 51). Mother-child shared book reading sessions were videotaped and coded for each utterance, including attention talk,…
Harris, Yvette R; Rothstein, Susan E
2014-01-01
The aim of this investigation was to identify the book reading behaviors and book reading styles of middle class African American mothers engaged in a shared book reading activity with their preschool children. To this end, the mothers and their children were videotaped reading one of three books, Julius, Grandfather and I, or Somewhere in Africa. Both maternal and child behaviors were coded for the frequency of occurrence of story grammar elements contained in their stories and maternal behaviors were also coded for their use of narrative eliciting strategies. In addition, mothers were queried about the quality and quantity of book reading/story telling interactions in the home environment. The results suggest that there is a great deal of individual variation in how mothers use the story grammar elements and narrative eliciting strategies to engage their children in a shared book reading activity. Findings are discussed in terms of suggestions for additional research and practical applications are offered on ways to optimally engage African American preschool children and African American families from diverse socioeconomic backgrounds in shared book reading interactions.
Harris, Yvette R.; Rothstein, Susan E.
2014-01-01
The aim of this investigation was to identify the book reading behaviors and book reading styles of middle class African American mothers engaged in a shared book reading activity with their preschool children. To this end, the mothers and their children were videotaped reading one of three books, Julius, Grandfather and I, or Somewhere in Africa. Both maternal and child behaviors were coded for the frequency of occurrence of story grammar elements contained in their stories and maternal behaviors were also coded for their use of narrative eliciting strategies. In addition, mothers were queried about the quality and quantity of book reading/story telling interactions in the home environment. The results suggest that there is a great deal of individual variation in how mothers use the story grammar elements and narrative eliciting strategies to engage their children in a shared book reading activity. Findings are discussed in terms of suggestions for additional research and practical applications are offered on ways to optimally engage African American preschool children and African American families from diverse socioeconomic backgrounds in shared book reading interactions. PMID:24926276
Precursors of Reading Difficulties in Czech and Slovak Children At-Risk of Dyslexia.
Moll, Kristina; Thompson, Paul A; Mikulajova, Marina; Jagercikova, Zuzana; Kucharska, Anna; Franke, Helena; Hulme, Charles; Snowling, Margaret J
2016-05-01
Children with preschool language difficulties are at high risk of literacy problems; however, the nature of the relationship between delayed language development and dyslexia is not understood. Three hundred eight Slovak and Czech children were recruited into three groups: family risk of dyslexia, speech/language difficulties and controls, and were assessed three times from kindergarten until Grade 1. There was a twofold increase in probability of reading problems in each risk group. Precursors of 'dyslexia' included difficulties in oral language and code-related skills (phoneme awareness, letter-knowledge and rapid automatized naming); poor performance in phonological memory and vocabulary was observed in both affected and unaffected high-risk peers. A two-group latent variable path model shows that early language skills predict code-related skills, which in turn predict literacy skills. Findings suggest that dyslexia in Slavic languages has its origins in early language deficits, and children who succumb to reading problems show impaired code-related skills before the onset of formal reading instruction. Copyright © 2016 John Wiley & Sons, Ltd.
What Do Beginning Special Educators Need to Know about Intensive Reading Interventions?
ERIC Educational Resources Information Center
Coyne, Michael D.; Koriakin, Taylor A.
2017-01-01
Evidence based reading instruction and intervention are essential for students with disabilities. The authors recommend that elementary special education teachers emphasize both code-based and meaning-based skills as part of delivering intensive reading interventions, including providing explicit and systematic decoding and vocabulary instruction.…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-04
...The FAA proposes to rule and invite public comment on the release of land at the Reading Regional Airport, Reading, Pennsylvania under the provisions of Section 47125(a) of Title 49 United States Code (U.S.C.).
Computational Fluids Domain Reduction to a Simplified Fluid Network
2012-04-19
readily available read/write software library. Code components from the open source projects OpenFOAM and ParaView were explored for their adaptability...to the project. Both ParaView and OpenFOAM read polyhedral meshes. OpenFOAM does not read results data. ParaView actually allows for user “filters
Rcount: simple and flexible RNA-Seq read counting.
Schmid, Marc W; Grossniklaus, Ueli
2015-02-01
Analysis of differential gene expression by RNA sequencing (RNA-Seq) is frequently done using feature counts, i.e. the number of reads mapping to a gene. However, commonly used count algorithms (e.g. HTSeq) do not address the problem of reads aligning with multiple locations in the genome (multireads) or reads aligning with positions where two or more genes overlap (ambiguous reads). Rcount specifically addresses these issues. Furthermore, Rcount allows the user to assign priorities to certain feature types (e.g. higher priority for protein-coding genes compared to rRNA-coding genes) or to add flanking regions. Rcount provides a fast and easy-to-use graphical user interface requiring no command line or programming skills. It is implemented in C++ using the SeqAn (www.seqan.de) and the Qt libraries (qt-project.org). Source code and 64 bit binaries for (Ubuntu) Linux, Windows (7) and MacOSX are released under the GPLv3 license and are freely available on github.com/MWSchmid/Rcount. marcschmid@gmx.ch Test data, genome annotation files, useful Python and R scripts and a step-by-step user guide (including run-time and memory usage tests) are available on github.com/MWSchmid/Rcount. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
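The priority idea can be illustrated with a short sketch; this is not Rcount's actual implementation, just a minimal stand-in showing how a read overlapping several features might be credited to the highest-priority feature type, with remaining ties split fractionally. The priority values and read overlaps below are invented for the example.

```python
# Toy priority-aware feature counting (illustrative, not Rcount's algorithm).
from collections import defaultdict

PRIORITY = {"protein_coding": 2, "rRNA": 1}   # assumed example priorities

def count_reads(read_overlaps, priority=PRIORITY):
    """read_overlaps: list of lists, one per read, of (gene_id, feature_type)."""
    counts = defaultdict(float)
    for overlaps in read_overlaps:
        if not overlaps:
            continue                                  # unmapped read
        best = max(priority.get(ft, 0) for _, ft in overlaps)
        winners = [g for g, ft in overlaps if priority.get(ft, 0) == best]
        for g in winners:                             # split remaining ambiguity
            counts[g] += 1.0 / len(winners)
    return dict(counts)

reads = [[("geneA", "protein_coding"), ("rna1", "rRNA")],             # resolved by priority
         [("geneA", "protein_coding"), ("geneB", "protein_coding")]]  # split 0.5/0.5
print(count_reads(reads))
```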
“Down the Language Rabbit Hole with Alice”: A Case Study of a Deaf Girl with a Cochlear Implant
Andrews, Jean F.; Dionne, Vickie
2011-01-01
Alice, a deaf girl who received a cochlear implant after three years of age, was exposed to four weeks of storybook sessions conducted in American Sign Language (ASL) and speech (English). Two research questions were addressed: (1) how did she use her sign bimodal/bilingualism, codeswitching, and codemixing during reading activities, and (2) what sign bilingual codeswitching and codemixing strategies did she use while attending to stories delivered under two treatments: ASL only and speech only. Retelling scores were collected to determine the type and frequency of her codeswitching/codemixing strategies between both languages after Alice was read a story in ASL and in spoken English. Qualitative descriptive methods were utilized. Teacher, clinician and student transcripts of the reading and retelling sessions were recorded. Results showed Alice frequently used codeswitching and codemixing strategies while retelling the stories under both treatments. Alice's speech production increased in her retellings of the stories under both the ASL storyreading and the spoken English-only reading. The ASL storyreading did not decrease Alice's retelling scores in spoken English. Professionals are encouraged to consider the benefits of early sign bimodal/bilingualism to enhance the overall speech, language and reading proficiency of deaf children with cochlear implants. PMID:22135677
Digital Data Matrix Scanner Development at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
2004-01-01
Research at NASA's Marshall Space Flight Center has resulted in a system for reading hidden identification codes using a hand-held magnetic scanner. It's an invention that could help businesses improve inventory management, enhance safety, improve security, and aid in recall efforts if defects are discovered. Two-dimensional Data Matrix symbols consisting of letters and numbers permanently etched on items for identification and resembling a small checkerboard pattern are more efficient and reliable than traditional bar codes, and can store up to 100 times more information. A team led by Fred Schramm of the Marshall Center's Technology Transfer Department, in partnership with PRI, Torrance, California, has developed a hand-held device that can read this special type of coded symbols, even if covered by up to six layers of paint. Before this new technology was available, matrix symbols were read with optical scanners, and only if the codes were visible. This latest improvement in digital Data Matrix technologies offers greater flexibility for businesses and industries already using the marking system. Paint, inks, and pastes containing magnetic properties are applied in matrix symbol patterns to objects with two-dimensional codes, and the codes are read by a magnetic scanner, even after being covered with paint or other coatings. The ability to read hidden matrix symbols promises a wide range of benefits in a number of fields, including airlines, electronics, healthcare, and the automotive industry. Many industries would like to hide information on a part, so it can be read only by the party who put it there. For instance, the automotive industry uses direct parts marking for inventory control, but for aesthetic purposes the marks often need to be invisible. Symbols have been applied to a variety of materials, including metal, plastic, glass, paper, fabric and foam, on everything from electronic parts to pharmaceuticals to livestock. The portability of the hand-held scanner makes work faster and easier. It reads marks in darkness, under bright light that might interfere with optical reading of visible marks, and can detect symbols obscured by discoloration or contamination. Through a license with NASA, another partner, Robotic Vision Systems, Inc., of Nashua, New Hampshire, will sell the scanner on the commercial market. NASA continues to seek additional companies to license the product. Joint efforts by Marshall researchers and industry partners are aimed at improving identification technology as part of NASA's program to better life on Earth through technology designed for the space program. In this photo, Don Roxby, Robotic Vision Systems, Inc., (left) demonstrates the magnetic handheld scanner for Fred Schramm (right), MSFC Technology Transfer Department.
The Simple Video Coder: A free tool for efficiently coding social video data.
Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C
2017-08-01
Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
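The outcome measures mentioned above (event timing, frequency, and duration) can be illustrated with a minimal sketch; the event list and field layout are assumptions for the example, not the Simple Video Coder's actual output format.

```python
# Toy summary of coded behavioral events as (behavior, start_s, end_s) tuples.
from collections import defaultdict

def summarize(events):
    freq, dur = defaultdict(int), defaultdict(float)
    for behavior, start, end in events:
        freq[behavior] += 1                 # event frequency
        dur[behavior] += end - start        # total duration in seconds
    return {b: {"count": freq[b], "total_s": dur[b]} for b in freq}

coded = [("gesture", 2.0, 3.5), ("speech", 4.0, 9.0), ("gesture", 10.0, 11.0)]
print(summarize(coded))   # e.g. gesture: 2 events, 2.5 s total
```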
Experimental QR code optical encryption: noise-free data recovering.
Barrera, John Fredy; Mira-Agudelo, Alejandro; Torroba, Roberto
2014-05-15
We report, to our knowledge for the first time, the experimental implementation of a quick response (QR) code as a "container" in an optical encryption system. A joint transform correlator architecture in an interferometric configuration is chosen as the experimental scheme. As the implementation is not possible in a single step, a multiplexing procedure to encrypt the QR code of the original information is applied. Once the QR code is correctly decrypted, the speckle noise present in the recovered QR code is eliminated by a simple digital procedure. Finally, the original information is retrieved completely free of any kind of degradation after reading the QR code. Additionally, we propose and implement a new protocol in which the reception of the encrypted QR code and its decryption, the digital block processing, and the reading of the decrypted QR code are performed employing only one device (smartphone, tablet, or computer). The overall method proves to produce an outcome that is far more attractive, making the adoption of the technique a plausible option. Experimental results are presented to demonstrate the practicality of the proposed security system.
Is Reading Instruction Evidence-Based? Analyzing Teaching Practices Using T-Patterns.
Suárez, Natalia; Sánchez, Carmen R; Jiménez, Juan E; Anguera, M Teresa
2018-01-01
The main goal of this study was to analyze whether primary teachers use evidence-based reading instruction for primary-grade readers. The study sample consisted of six teachers whose teaching was recorded. The observation instrument used was developed ad hoc for this study. The recording instrument used was Match Vision Studio. The data analysis was performed using SAS, GT version 2.0 E, and THEME. The results indicated that the teaching practices used most frequently and for the longest duration were: feedback (i.e., correcting the student when reading); fluency (i.e., individual and group reading, both out loud and silently, with and without intonation); literal or inference comprehension exercises (i.e., summarizing, asking questions); and use of educational resources (i.e., stories, songs, poems). Later, we conducted analyses of T-Patterns that showed the sequence of instruction in detail. We can conclude that <50% of the teaching practices used by the majority of teachers were based on the recommendations of the National Reading Panel (NRP). Only one teacher followed best practices. The same was the case for instructional time spent on the five essential components of reading, with the exception of teacher E., who dedicated 70.31% of class time implementing best practices. Teaching practices (i.e., learners' activities) designed and implemented to exercise and master alphabetic knowledge and phonological awareness skills were used less frequently in the classroom.
Is Reading Instruction Evidence-Based? Analyzing Teaching Practices Using T-Patterns
Suárez, Natalia; Sánchez, Carmen R.; Jiménez, Juan E.; Anguera, M. Teresa
2018-01-01
The main goal of this study was to analyze whether primary teachers use evidence-based reading instruction for primary-grade readers. The study sample consisted of six teachers whose teaching was recorded. The observation instrument used was developed ad hoc for this study. The recording instrument used was Match Vision Studio. The data analysis was performed using SAS, GT version 2.0 E, and THEME. The results indicated that the teaching practices used most frequently and for the longest duration were: feedback (i.e., correcting the student when reading); fluency (i.e., individual and group reading, both out loud and silently, with and without intonation); literal or inference comprehension exercises (i.e., summarizing, asking questions); and use of educational resources (i.e., stories, songs, poems). Later, we conducted analyses of T-Patterns that showed the sequence of instruction in detail. We can conclude that <50% of the teaching practices used by the majority of teachers were based on the recommendations of the National Reading Panel (NRP). Only one teacher followed best practices. The same was the case for instructional time spent on the five essential components of reading, with the exception of teacher E., who dedicated 70.31% of class time implementing best practices. Teaching practices (i.e., learners' activities) designed and implemented to exercise and master alphabetic knowledge and phonological awareness skills were used less frequently in the classroom. PMID:29449818
ERIC Educational Resources Information Center
Kovelman, Ioulia; Salah-Ud-Din, Maha; Berens, Melody S.; Petitto, Laura-Ann
2015-01-01
In teaching reading, educators strive to find the balance between a code-emphasis approach and a meaning-oriented literacy approach. However, little is known about how different approaches to literacy can benefit bilingual children's early reading acquisition. To investigate the novel hypothesis that children's age of first bilingual exposure can…
Easy-to-Read Informed Consent Forms for Hematopoietic Cell Transplantation Clinical Trials
Denzen, Ellen M; Santibáñez, Martha E Burton; Moore, Heather; Foley, Amy; Gersten, Iris D; Gurgol, Cathy; Majhail, Navneet S; Spellecy, Ryan; Horowitz, Mary M; Murphy, Elizabeth A
2011-01-01
Informed consent is essential to ethical research and is requisite to participation in clinical research. Yet most hematopoietic cell transplantation (HCT) informed consent forms (ICFs) are written at reading levels that are above the ability of the average person in the US. The recent development of ICF templates by the National Cancer Institute, National Institutes of Health and the National Heart, Lung, and Blood Institute has not resulted in increased patient comprehension of information. Barriers to creating Easy-to-Read ICFs that meet US federal requirements and pass Institutional Review Board (IRB) review are the result of multiple interconnected factors. The Blood and Marrow Transplant Clinical Trials Network (BMT CTN) formed an ad hoc review team to address concerns regarding the overall readability and length of ICFs used for BMT CTN trials. This paper summarizes recommendations of the review team for the development and formatting of Easy-to-Read ICFs for HCT multicenter clinical trials, the most novel of which is the use of a two-column layout. These recommendations intend to guide the ICF writing process, simplify local IRB review of the ICF, enhance patient comprehension and improve patient satisfaction. The BMT CTN plans to evaluate the impact of the Easy-to-Read format compared to the traditional format on the informed consent process. PMID:21806948
Health Research Ethics: Between Ethics Codes and Culture.
Gheondea-Eladi, Alexandra
2017-10-01
This article is meant to describe and analyze some of the ethical difficulties encountered in a pilot research on treatment decisions of patients with chronic viral hepatitis C infection in Romania. It departs from an overview of the main ethics codes, and it shows that social health research on patients falls in between institutional codes of ethics. Furthermore, the article moves on to analyze so-called "important moments" of empirical research, such as the implementation of the ethical protocol, dealing with informal payments and with information on shady actions, as well as requests of information from interviewed patients and deciding when and if to breach confidentiality. In an attempt to evaluate the ad hoc solutions found in the field, the concluding remarks discuss these issues at the threshold of theory and practice.
Simultaneous isoform discovery and quantification from RNA-seq.
Hiller, David; Wong, Wing Hung
2013-05-01
RNA sequencing is a recent technology which has seen an explosion of methods addressing all levels of analysis, from read mapping to transcript assembly to differential expression modeling. In particular the discovery of isoforms at the transcript assembly stage is a complex problem and current approaches suffer from various limitations. For instance, many approaches use graphs to construct a minimal set of isoforms which covers the observed reads, then perform a separate algorithm to quantify the isoforms, which can result in a loss of power. Current methods also use ad-hoc solutions to deal with the vast number of possible isoforms which can be constructed from a given set of reads. Finally, while the need of taking into account features such as read pairing and sampling rate of reads has been acknowledged, most existing methods do not seamlessly integrate these features as part of the model. We present Montebello, an integrated statistical approach which performs simultaneous isoform discovery and quantification by using a Monte Carlo simulation to find the most likely isoform composition leading to a set of observed reads. We compare Montebello to Cufflinks, a popular isoform discovery approach, on a simulated data set and on 46.3 million brain reads from an Illumina tissue panel. On this data set Montebello appears to offer a modest improvement over Cufflinks when considering discovery and parsimony metrics. In addition Montebello mitigates specific difficulties inherent in the Cufflinks approach. Finally, Montebello can be fine-tuned depending on the type of solution desired.
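The general idea of searching over isoform compositions by Monte Carlo can be sketched as follows; this toy version is not the Montebello algorithm (it fits abundances crudely as equal weights and uses a simple per-isoform penalty), but it shows how proposals that toggle isoforms in and out of the composition are accepted or rejected against a read-compatibility likelihood. All inputs are invented for the example.

```python
# Toy Monte Carlo search over isoform compositions (illustrative only).
import math, random

def log_likelihood(active, read_compat):
    """active: set of isoform ids; read_compat: one set of compatible isoforms per read."""
    if not active:
        return float("-inf")
    theta = 1.0 / len(active)                      # crude equal-abundance fit
    ll = 0.0
    for compat in read_compat:
        p = theta * len(compat & active)
        if p == 0.0:
            return float("-inf")                   # composition cannot explain this read
        ll += math.log(p)
    return ll

def search(candidates, read_compat, iters=5000, penalty=2.0, seed=0):
    rng = random.Random(seed)
    current = set(candidates)                      # start with every candidate active
    best = set(current)
    best_score = log_likelihood(current, read_compat) - penalty * len(current)
    for _ in range(iters):
        proposal = set(current)
        proposal.symmetric_difference_update({rng.choice(candidates)})  # toggle one isoform
        score = log_likelihood(proposal, read_compat) - penalty * len(proposal)
        cur_score = log_likelihood(current, read_compat) - penalty * len(current)
        if score >= cur_score or rng.random() < math.exp(score - cur_score):
            current = proposal                     # Metropolis-style acceptance
        if score > best_score:
            best, best_score = set(proposal), score
    return best, best_score

isoforms = ["iso1", "iso2", "iso3"]
reads = [{"iso1"}, {"iso1", "iso2"}, {"iso2"}, {"iso1", "iso3"}]
print(search(isoforms, reads))
```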
RF Characteristics of Mica-Z Wireless Sensor Network Motes
2006-03-01
Mica-Z Wireless Sensor Network Motes, by Swee Jin Koh, March 2006. Thesis Advisor: Gurminder Singh; Thesis Co-Advisor: John C... [Report documentation page fragments — Author: Swee Jin Koh; Performing organization: Naval...; ad-hoc deployment; Number of pages: 83; Subject terms: Wireless Sensor Network.]
Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S
2009-02-01
To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Groupings (HRG) assignment. 1250 randomly-selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially-missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of 174.90 pounds per patient (14.7%), of which 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue and, given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These errors can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.
Peterson, Robin L; Pennington, Bruce F; Olson, Richard K
2013-01-01
We investigated the phonological and surface subtypes of developmental dyslexia in light of competing predictions made by two computational models of single word reading, the Dual-Route Cascaded Model (DRC; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) and Harm and Seidenberg's connectionist model (HS model; Harm & Seidenberg, 1999). The regression-outlier procedure was applied to a large sample to identify children with disproportionately poor phonological coding skills (phonological dyslexia) or disproportionately poor orthographic coding skills (surface dyslexia). Consistent with the predictions of the HS model, children with "pure" phonological dyslexia, who did not have orthographic deficits, had milder phonological impairments than children with "relative" phonological dyslexia, who did have secondary orthographic deficits. In addition, pure cases of dyslexia were more common among older children. Consistent with the predictions of the DRC model, surface dyslexia was not well conceptualized as a reading delay; both phonological and surface dyslexia were associated with patterns of developmental deviance. In addition, some results were problematic for both models. We identified a small number of individuals with severe phonological dyslexia, relatively intact orthographic coding skills, and very poor real word reading. Further, a subset of controls could read normally despite impaired orthographic coding. The findings are discussed in terms of improvements to both models that might help better account for all cases of developmental dyslexia. Copyright © 2012 Elsevier B.V. All rights reserved.
Peterson, Robin L.; Pennington, Bruce F.; Olson, Richard K.
2012-01-01
We investigated the phonological and surface subtypes of developmental dyslexia in light of competing predictions made by two computational models of single word reading, the dual-route cascaded model (DRC; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) and Harm and Seidenberg’s connectionist model (HS model; Harm & Seidenberg, 1999). The regression-outlier procedure was applied to a large sample to identify children with disproportionately poor phonological coding skills (phonological dyslexia) or disproportionately poor orthographic coding skills (surface dyslexia). Consistent with the predictions of the HS model, children with “pure” phonological dyslexia, who did not have orthographic deficits, had milder phonological impairments than children with “relative” phonological dyslexia, who did have secondary orthographic deficits. In addition, pure cases of dyslexia were more common among older children. Consistent with the predictions of the DRC model, surface dyslexia was not well conceptualized as a reading delay; both phonological and surface dyslexia were associated with patterns of developmental deviance. In addition, some results were problematic for both models. We identified a small number of individuals with severe phonological dyslexia, relatively intact orthographic coding skills, and very poor real word reading. Further, a subset of controls could read normally despite impaired orthographic coding. The findings are discussed in terms of improvements to both models that might help better account for all cases of developmental dyslexia. PMID:23010562
Architectural Coatings: National Volatile Organic Compounds Emission Standards
Read about the section 183(e) rule for volatile organic compounds for architectural coatings. Read the rule summary and history, find the Code of Federal Regulations text, and additional documents, including compliance information.
ERIC Educational Resources Information Center
Bratsch-Hines, Mary E.; Vernon-Feagans, Lynne; Varghese, Cheryl; Garwood, Justin
2017-01-01
This study explored the extent to which kindergarten and first grade teachers provided individualized reading instruction to struggling readers during a unique one-on-one reading instruction task. Three outcomes of teachers' instructional strategies were captured: code-focused strategies, meaning-focused strategies, and level of challenge. Child…
ERIC Educational Resources Information Center
O'Halloran, Kieran
2011-01-01
This article makes a contribution to understanding informal argumentation by focusing on the discourse of reading groups. Reading groups, an important cultural phenomenon in Britain and other countries, regularly meet in members' houses, in pubs or restaurants, in bookshops, workplaces, schools or prisons to share their experiences of reading…
Cognitive Training and Reading Remediation
ERIC Educational Resources Information Center
Mahapatra, Shamita
2015-01-01
Reading difficulties are experienced by children either because they fail to decode the words and thus are unable to comprehend the text or simply fail to comprehend the text even if they are able to decode the words and read them out. Failure in word decoding results from a failure in phonological coding of written information, whereas, reading…
Shared Book Reading and English Learners' Narrative Production and Comprehension
ERIC Educational Resources Information Center
Gámez, Perla B.; González, Dahlia; Urbin, LaNette M.
2017-01-01
This study examined the relation between exposure to shared book reading and Spanish-speaking English learners' (ELs'; n = 102) narrative production and comprehension skills in kindergarten (mean age = 6.12 years). Audio- and videotaped book-reading sessions in Spanish were coded in terms of teachers' extratextual talk and gestures. Using a silent…
Coding System for the First Grade Reading Group Study.
ERIC Educational Resources Information Center
Brophy, Jere; And Others
The First-Grade Reading Group Study is an experimental examination of teaching behaviors and their effects in first-grade reading groups. The specific teaching behaviors of interest are defined by a model for small group instruction which describes organization and management of the class, and ways of responding to children's answers that are…
Al Otaiba, Stephanie; Lake, Vickie E; Greulich, Luana; Folsom, Jessica S; Guidry, Lisa
2012-01-01
This randomized controlled trial examined the learning of preservice teachers taking an initial Early Literacy course in an early childhood education program and of the kindergarten or first grade students they tutored in their field experience. Preservice teachers were randomly assigned to one of two tutoring programs: Book Buddies and Tutor Assisted Intensive Learning Strategies (TAILS), which provided identical meaning-focused instruction (shared book reading) but differed in the presentation of code-focused skills. TAILS used explicit, scripted lessons, and Book Buddies required that code-focused instruction take place during shared book reading. Our research goal was to understand which tutoring program would be most effective in improving knowledge about reading, in leading to broad and deep knowledge and preparedness among the novice preservice teachers, and in yielding the most successful student reading outcomes. Findings indicate that all preservice teachers demonstrated similar gains in knowledge, but preservice teachers in the TAILS program demonstrated broader and deeper application of knowledge and higher self-ratings of preparedness to teach reading. Students in both conditions made similar comprehension gains, but students tutored with TAILS showed significantly stronger decoding gains.
Implementing MANETS in Android based environment using Wi-Fi direct
NASA Astrophysics Data System (ADS)
Waqas, Muhammad; Babar, Mohammad Inayatullah Khan; Zafar, Mohammad Haseeb
2015-05-01
Packet loss occurs in real-time voice transmission over a wireless broadcast ad-hoc network, which creates disruptions in sound. The basic objective of this research is to design a wireless ad-hoc network based on two Android devices by using the Wireless Fidelity (Wi-Fi) Direct Application Programming Interface (API) and to apply a network codec, the Reed-Solomon code. The network codec is used to encode the data of a music WAV file and to recover any lost packets; packets are deliberately dropped by a loss module at the transmitting device so that performance can be analyzed, with the objective of retrieving the original file at the receiving device using the network codec. This resulted in faster transmission of the files despite dropped packets. In the end, both files were recovered in the original music format, with a complete performance analysis based on the transmission delay.
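The packet-recovery idea can be illustrated with a much simpler stand-in for the Reed-Solomon network codec used in the study: a single XOR parity packet per group, which lets the receiver rebuild one lost packet. Packet contents and group size below are arbitrary example values.

```python
# Minimal sketch of packet-level erasure recovery via a single XOR parity packet
# (a simple stand-in for the Reed-Solomon codec described above).
def make_group(packets):
    """Append an XOR parity packet to a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return packets + [bytes(parity)]

def recover(group_with_one_loss):
    """Rebuild the single missing packet (marked as None) from the others."""
    missing = group_with_one_loss.index(None)
    length = len(next(p for p in group_with_one_loss if p is not None))
    rebuilt = bytearray(length)
    for j, pkt in enumerate(group_with_one_loss):
        if j == missing:
            continue
        for i, b in enumerate(pkt):
            rebuilt[i] ^= b
    data = list(group_with_one_loss)
    data[missing] = bytes(rebuilt)
    return data[:-1]                      # drop the parity packet

audio = [b"chunk-00", b"chunk-01", b"chunk-02"]
sent = make_group(audio)
received = [sent[0], None, sent[2], sent[3]]   # second packet lost in transit
assert recover(received) == audio
```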
2013-01-01
Background Estimates of the prevalence of irritable bowel syndrome (IBS) vary widely, and a large proportion of patients report having consulted their general practitioner (GP). In patients with new onset gastrointestinal symptoms in primary care it might be possible to predict those at risk of persistent symptoms. However, one of the difficulties is identifying patients within primary care. GPs use a variety of Read Codes to describe patients presenting with IBS. Furthermore, in a qualitative study, exploring GPs’ attitudes and approaches to defining patients with IBS, GPs appeared reluctant to add the IBS Read Code to the patient record until more serious conditions were ruled out. Consequently, symptom codes such as 'abdominal pain’, 'diarrhoea’ or 'constipation’ are used. The aim of the current study was to investigate the prevalence of recorded consultations for IBS and to explore the symptom profile of patients with IBS using data from the Salford Integrated Record (SIR). Methods This was a database study using the SIR, a local patient sharing record system integrating primary, community and secondary care information. Records were obtained for a cohort of patients with gastrointestinal disorders from January 2002 to December 2011. Prevalence rates, symptom recording, medication prescribing and referral patterns were compared for three patient groups (IBS, abdominal pain (AP) and Inflammatory Bowel Disease (IBD)). Results The prevalence of IBS (age standardised rate: 616 per year per 100,000 population) was much lower than expected compared with that reported in the literature. The majority of patients (69%) had no gastrointestinal symptoms recorded in the year prior to their IBS. However a proportion of these (22%) were likely to have been prescribed NICE guideline recommended medications for IBS in that year. The findings for AP and IBD were similar. Conclusions Using Read Codes to identify patients with IBS may lead to a large underestimate of the community prevalence. The IBS diagnostic Read Code was rarely applied in practice. There are similarities with many other medically unexplained symptoms which are typically difficult to diagnose in clinical practice. PMID:24295337
Harkness, Elaine F; Grant, Laura; O'Brien, Sarah J; Chew-Graham, Carolyn A; Thompson, David G
2013-12-02
Estimates of the prevalence of irritable bowel syndrome (IBS) vary widely, and a large proportion of patients report having consulted their general practitioner (GP). In patients with new onset gastrointestinal symptoms in primary care it might be possible to predict those at risk of persistent symptoms. However, one of the difficulties is identifying patients within primary care. GPs use a variety of Read Codes to describe patients presenting with IBS. Furthermore, in a qualitative study, exploring GPs' attitudes and approaches to defining patients with IBS, GPs appeared reluctant to add the IBS Read Code to the patient record until more serious conditions were ruled out. Consequently, symptom codes such as 'abdominal pain', 'diarrhoea' or 'constipation' are used. The aim of the current study was to investigate the prevalence of recorded consultations for IBS and to explore the symptom profile of patients with IBS using data from the Salford Integrated Record (SIR). This was a database study using the SIR, a local patient sharing record system integrating primary, community and secondary care information. Records were obtained for a cohort of patients with gastrointestinal disorders from January 2002 to December 2011. Prevalence rates, symptom recording, medication prescribing and referral patterns were compared for three patient groups (IBS, abdominal pain (AP) and Inflammatory Bowel Disease (IBD)). The prevalence of IBS (age standardised rate: 616 per year per 100,000 population) was much lower than expected compared with that reported in the literature. The majority of patients (69%) had no gastrointestinal symptoms recorded in the year prior to their IBS. However a proportion of these (22%) were likely to have been prescribed NICE guideline recommended medications for IBS in that year. The findings for AP and IBD were similar. Using Read Codes to identify patients with IBS may lead to a large underestimate of the community prevalence. The IBS diagnostic Read Code was rarely applied in practice. There are similarities with many other medically unexplained symptoms which are typically difficult to diagnose in clinical practice.
Schultz, Douglas S; Brabender, Virginia M
2013-01-01
To determine the effects of reading the Wikipedia article on the Rorschach on Comprehensive System variables, participants in this study (recruited from parent-teacher associations, online message boards, and graduate schools; N = 50) were provided with either a copy of the Wikipedia article on the Rorschach (from April 2010) or an irrelevant article, then administered the Rorschach and instructed to "fake good." Monetary incentives were used to increase motivation to dissimulate. Initial results indicated that participants given the Wikipedia article produced a lower number of responses (R) and had higher scores on Populars, X+%, XA%, and WDA% as compared to controls. However, post-hoc analyses revealed that when the influence of Populars was controlled, significant differences for X+%, XA%, and WDA% disappeared. No significant differences were found for Form%, Zf, Blends, or PER, although post-hoc analyses controlling for differences in R revealed a significant difference between groups on Zf%. Limitations of the study and implications for clinical and forensic practice are discussed.
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
NASA Technical Reports Server (NTRS)
2004-01-01
Two-dimensional data matrix symbols, which contain encoded letters and numbers, are permanently etched on items for identification. They can store up to 100 times more information than traditional bar codes. While the symbols provide several advantages over bar codes, once they are covered by paint they can no longer be read by optical scanners. Since most products are painted eventually, this presents a problem for industries relying on the symbols for identification and tracking. In 1987, NASA's Marshall Space Flight Center began studying direct parts marking with matrix symbols in order to track millions of Space Shuttle parts. Advances in the technology proved that by incorporating magnetic properties into the paints, inks, and pastes used to apply the matrix symbols, the codes could be read by a magnetic scanner even after being covered with paint or other coatings. NASA received a patent for such a scanner in 1998, but the system it used for development was not portable and was too costly. A prototype was needed as a lead-in to a production model. In the summer of 2000, NASA began seeking companies to build a hand-held scanner that would detect the Read Through Paint data matrix identification marks containing magnetic materials through coatings.
Environmental Mapping by a HERO-1 Robot Using Sonar and a Laser Barcode Scanner.
1983-12-01
can be labeled with an x-y type coordinate grid allowing the rover to directly read its location as it moves along. A different approach is to...uses a two-dimensional grid of two-character barcodes as reference objects. Since bar codes are designed to be read in either of two orientations (top...Processing Laboratory at AFIT (see Appendix B for listing). Navigation grid codes consist of two digits running consecutively from 00 to FF, yielding 256
Reading the Past to Inform the Future: 25 Years of "The Reading Teacher"
ERIC Educational Resources Information Center
Mohr, Kathleen A. J.; Ding, Guoqin; Strong, Ashley; Branum, Lezlie; Watson, Nanette; Priestley, K. Lea; Juth, Stephanie; Carpenter, Neil; Lundstrom, Kacy
2017-01-01
This analysis examines articles from the past 25 years of "The Reading Teacher" to better understand the journal's content and trends influencing literacy instruction. A research team coded and analyzed the frequency of topics and grade levels targeted, then compared results with those of a similar analysis published in 1992. The Web of…
ERIC Educational Resources Information Center
Skibbe, Lori E.; Moody, Amelia J.; Justice, Laura M.; McGinty, Anita S.
2010-01-01
The current study describes the storybook reading behaviors of 45 preschoolers [30 with language impairment (LI) and 15 with typical language (TL)] and their mothers. Each dyad was observed reading a storybook within their homes, and sessions were subsequently coded for indicators of emotional and instructional quality as well as for child…
Pitch Error Analysis of Young Piano Students' Music Reading Performances
ERIC Educational Resources Information Center
Rut Gudmundsdottir, Helga
2010-01-01
This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…
FORMAS--Feedback to Oral Reading Analysis System. Training Manual. Manual No. 5085.
ERIC Educational Resources Information Center
Hoffman, J. V.; And Others
The Feedback to Oral Reading Miscue Analysis System (FORMAS) is a low-inference coding system developed to characterize verbal interaction between teacher and students during oral reading instruction. The six lessons presented in this manual are designed to teach the use of FORMAS in approximately ten hours. Each of the lessons deals with one of…
Holographic Labeling And Reading Machine For Authentication And Security Applications
Weber, David C.; Trolinger, James D.
1999-07-06
A holographic security label and automated reading machine for marking and subsequently authenticating any object such as an identification badge, a pass, a ticket, a manufactured part, or a package is described. The security label is extremely difficult to copy or even to read by unauthorized persons. The system comprises a holographic security label that has been created with a coded reference wave, whose specification can be kept secret. The label contains information that can be extracted only with the coded reference wave, which is derived from a holographic key, which restricts access of the information to only the possessor of the key. A reading machine accesses the information contained in the label and compares it with data stored in the machine through the application of a joint transform correlator, which is also equipped with a reference hologram that adds additional security to the procedure.
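To give a sense of the correlation step, the following sketch shows a digital analogue of matching a captured label pattern against a stored reference via FFT-based cross-correlation with a peak threshold. It is only an illustration of correlation-based authentication, not the optical joint transform correlator described here, and the threshold value is an arbitrary assumption.

    # Digital analogue of correlation-based authentication (illustrative only;
    # the paper describes an optical joint transform correlator, not this code).
    import numpy as np

    def correlation_peak(captured, reference):
        """Normalized cross-correlation peak between two equal-sized 2-D patterns."""
        c = (captured - captured.mean()) / (captured.std() + 1e-12)
        r = (reference - reference.mean()) / (reference.std() + 1e-12)
        corr = np.fft.ifft2(np.fft.fft2(c) * np.conj(np.fft.fft2(r))).real
        return corr.max() / c.size

    def authenticate(captured, reference, threshold=0.8):
        # Hypothetical threshold; a real system would calibrate it empirically.
        return correlation_peak(captured, reference) >= threshold

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    noisy = ref + 0.05 * rng.standard_normal((64, 64))   # genuine label, slightly noisy
    fake = rng.random((64, 64))                          # unrelated pattern
    print(authenticate(noisy, ref), authenticate(fake, ref))  # True False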
On forward inferences of fast and slow readers. An eye movement study
Hawelka, Stefan; Schuster, Sarah; Gagl, Benjamin; Hutzler, Florian
2015-01-01
Unimpaired readers process words incredibly fast and hence it was assumed that top-down processing, such as predicting upcoming words, would be too slow to play an appreciable role in reading. This runs counter to the major postulate of the predictive coding framework that our brain continually predicts probable upcoming sensory events. This means it may generate predictions about the probable upcoming word during reading (dubbed forward inferences). To assess these contradictory assumptions, we evaluated the effect of the predictability of words in sentences on eye movement control during silent reading. Participants were a group of fluent (i.e., fast) and a group of speed-impaired (i.e., slow) readers. The findings indicate that fast readers generate forward inferences, whereas speed-impaired readers do so to a reduced extent, indicating a significant role of predictive coding for fluent reading. PMID:25678030
Space-Time Processing for Tactical Mobile Ad Hoc Networks
2010-05-01
Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA," IEEE Transactions on Communications, Vol. 57, No. 10, pp. 2988...include direct sequence code division multiple access (DS-CDMA), Frequency Hopped (FH) CDMA and Orthogonal Frequency Division Multiple Access (OFDMA...capability, LPD/LPI, and operability in non-continuous spectrum. In addition, FH-CDMA is robust to the near-far problem, while DS-CDMA requires
Book reading styles in dual-parent and single-mother families.
Blake, Joanna; Macdonald, Silvana; Bayrami, Lisa; Agosta, Vanessa; Milian, Andrea
2006-09-01
Whereas many studies have investigated quantitative aspects of book reading (frequency), few have examined qualitative aspects, especially in very young children and through direct observations of shared reading. The purpose of this study was to determine possible differences in book-reading styles between mothers and fathers and between mothers from single- and dual-parent families. It also related types of parental verbalizations during book reading to children's reported language measures. Dual-parent (29) and single-parent (24) families were observed in shared book reading with their toddlers (15-month-olds) or young preschoolers (27-month-olds). Parent-child dyads were videotaped while book reading. The initiator of each book-reading episode was coded. Parents' verbalizations were exhaustively coded into 10 categories. Mothers completed the MacArthur Communicative Development Inventory, and the children were given the Bayley scales. All parents differentiated their verbalizations according to the age rather than the gender of the child, but single mothers imitated female children more than males. Few differences in verbalizations were found between mothers and fathers or between mothers from single- and dual-parent families. Fathers allowed younger children to initiate book-reading episodes more than mothers. For both age groups of children, combined across families, verbalizations that related the book to the child's experience were correlated with reported language measures. Questions and imitations were related to language measures for the older age group. The important types of parental verbalizations during shared book reading for children's language acquisition are relating, questions and imitations.
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
AbouHaidar, Mounir Georges; Venkataraman, Srividhya; Golshani, Ashkan; Liu, Bolin; Ahmad, Tauqeer
2014-10-07
The highly structured (64% GC) covalently closed circular (CCC) RNA (220 nt) of the virusoid associated with rice yellow mottle virus codes for a 16-kDa highly basic protein using novel modalities for coding, translation, and gene expression. This CCC RNA is the smallest among all known viroids and virusoids and the only one that codes proteins. Its sequence possesses an internal ribosome entry site and is directly translated through two (or three) completely overlapping ORFs (shifting to a new reading frame at the end of each round). The initiation and termination codons overlap UGAUGA (the initiation codon AUG is embedded within this combined initiation-termination sequence). Termination codons can be ignored to obtain larger read-through proteins. This circular RNA with no noncoding sequences is a unique natural supercompact "nanogenome."
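The frame shifting arises from simple modular arithmetic: 220 is not a multiple of 3, so each complete circuit moves the ribosome into a new frame. The sketch below illustrates this at the codon level with a random 220-nt circular sequence (not the actual virusoid sequence).

    # Arithmetic behind the overlapping-ORF mechanism: because the circular RNA's
    # length (220 nt) is not a multiple of 3, each full circuit shifts the reading
    # frame, so successive rounds translate overlapping frames. Illustrative only;
    # the sequence below is random, not the virusoid's.
    import random

    LENGTH = 220
    print(LENGTH % 3)        # 1 -> the frame advances with every full circuit

    random.seed(0)
    circle = "".join(random.choice("ACGU") for _ in range(LENGTH))

    def read_around(circle, n_codons):
        """Yield (frame on the genome, codon) while reading continuously around the circle."""
        for i in range(n_codons):
            pos = (3 * i) % len(circle)                      # codon start on the circular genome
            codon = "".join(circle[(pos + k) % len(circle)] for k in range(3))
            yield pos % 3, codon

    frames = [frame for frame, _ in read_around(circle, n_codons=220)]
    print(sorted(set(frames)))   # [0, 1, 2]: three circuits visit all three frames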
Semantic and visual memory codes in learning disabled readers.
Swanson, H L
1984-02-01
Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.
Staggering of angular momentum distribution in fission
NASA Astrophysics Data System (ADS)
Tamagno, Pierre; Litaize, Olivier
2018-03-01
We review here the role of angular momentum distributions in the fission process. To do so the algorithm implemented in the FIFRELIN code [?] is detailed with special emphasis on the place of fission fragment angular momenta. The usual Rayleigh distribution used for angular momentum distribution is presented and the related model derivation is recalled. Arguments are given to justify why this distribution should not hold for low excitation energy of the fission fragments. An alternative ad hoc expression taking into account low-lying collectiveness is presented as has been implemented in the FIFRELIN code. Yet on observables currently provided by the code, no dramatic impact has been found. To quantify the magnitude of the impact of the low-lying staggering in the angular momentum distribution, a textbook case is considered for the decay of the 144Ba nucleus with low excitation energy.
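For orientation, the sketch below samples fragment spins from the commonly quoted spin-cutoff ("Rayleigh-like") form P(J) proportional to (2J+1) exp(-J(J+1)/(2 sigma^2)), with an optional ad hoc even-odd staggering factor. The functional forms and parameter values are generic illustrations and are not the exact expressions implemented in FIFRELIN.

    # Illustrative sampling of fragment angular momenta J from a spin-cutoff
    # distribution, with an optional ad hoc even-odd staggering factor. The
    # forms and parameters are generic textbook choices, NOT FIFRELIN's own.
    import numpy as np

    def spin_distribution(j_max=30, sigma=7.0, staggering=0.0):
        j = np.arange(j_max + 1)                         # integer J for simplicity
        p = (2 * j + 1) * np.exp(-j * (j + 1) / (2.0 * sigma**2))
        p *= 1.0 + staggering * (j % 2 == 0)             # hypothetical even-J enhancement
        return j, p / p.sum()

    def sample_spins(n, **kwargs):
        j, p = spin_distribution(**kwargs)
        return np.random.default_rng(1).choice(j, size=n, p=p)

    spins = sample_spins(100000, sigma=7.0, staggering=0.3)
    print(spins.mean())   # average fragment spin under these illustrative parameters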
Georgiou, George K; Das, J P
2016-01-01
The purpose of this study was two-fold: (a) to examine what component of executive functions (EF) - planning and working memory - predicts reading comprehension in young adults (Study 1), and (b) to examine if less skilled comprehenders experience deficits in the EF components (Study 2). In Study 1, we assessed 178 university students (120 females; mean age=21.82 years) on planning (Planned Connections, Planned Codes, and Planned Patterns), working memory (Listening Span, Digit Span Backward, and Digit Memory), and reading comprehension (Nelson-Denny Reading Test). The results of structural equation modeling indicated that only planning was a significant predictor of reading comprehension. In Study 2, we assessed 30 university students with a specific reading comprehension deficit (19 females; mean age=23.01 years) and 30 controls (18 females; mean age=22.77 years) on planning (Planned Connections and Crack the Code) and working memory (Listening Span and Digit Span Backward). The results showed that less skilled comprehenders performed significantly poorer than controls only in planning. Taken together, the findings of both studies suggest that planning is the preeminent component of EF that is driving its relationship with reading comprehension in young adults. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Meeks, Linda; Stephenson, Jennifer; Kemp, Coral; Madelaine, Alison
2016-01-01
This review examined studies that had addressed opinions of pre-service teachers (PSTs) concerning their preparedness for teaching early reading skills to all students, the extent of their content knowledge, and their attitudes towards code-based and/or meaning-based approaches to early reading. From the limited amount of research available, it…
Perceptual uncertainty is a property of the cognitive system.
Perea, Manuel; Carreiras, Manuel
2012-10-01
We qualify Frost's proposals regarding letter-position coding in visual word recognition and the universal model of reading. First, we show that perceptual uncertainty regarding letter position is not tied to European languages; instead, it is a general property of the cognitive system. Second, we argue that a universal model of reading should incorporate a developmental view of the reading process.
Conferring in the CAFÉ: One-to-One Reading Conferences in Two First Grade Classrooms
ERIC Educational Resources Information Center
Pletcher, Bethanie; Christensen, Rosalynn
2017-01-01
The purpose of this qualitative descriptive case study was to explore the teacher/student reading conferences in two first grade teachers' classrooms in one primary school. Sixteen one-to-one reading conferences were recorded and transcribed over a two-month period and coded for content as related to the CAFÉ (Boushey & Moser, 2009) model of…
ERIC Educational Resources Information Center
Cromley, Jennifer G.; Wills, Theodore W.
2016-01-01
Van den Broek's landscape model explicitly posits sequences of moves during reading in real time. Two other models that implicitly describe sequences of processes during reading are tested in the present research. Coded think-aloud data from 24 undergraduate students reading scientific text were analysed with lag-sequential techniques to compare…
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6). PMID:26492348
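As a minimal illustration of barcode demultiplexing, the sketch below assigns each read's barcode region to the nearest codeword by Hamming distance, within a mismatch budget. A real LDPC barcode system would use a proper iterative decoder, and the 8-nt codewords shown are invented for the example (the designs described above are 24-nt).

    # Minimal nearest-codeword demultiplexing sketch. A real LDPC barcode system
    # would use a soft/iterative decoder; Hamming-distance assignment is shown
    # here only to illustrate the multiplexing idea.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def assign_read(barcode_region, codewords, max_mismatches=2):
        """Return the sample whose barcode is closest, or None (read loss)."""
        best_sample, best_dist = None, max_mismatches + 1
        for sample, code in codewords.items():
            d = hamming(barcode_region, code)
            if d < best_dist:
                best_sample, best_dist = sample, d
        return best_sample  # None if every codeword exceeds the mismatch budget

    # Hypothetical 8-nt barcodes for two samples.
    codewords = {"sample_1": "ACGTACGT", "sample_2": "TTGGCCAA"}
    print(assign_read("ACGTACGA", codewords))  # sample_1 (one mismatch tolerated)
    print(assign_read("GGGGGGGG", codewords))  # None -> counted as a read loss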
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
Software Architecture of Sensor Data Distribution In Planetary Exploration
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo
2006-01-01
Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade and hardware failures can cause: a) Data errors; b) Out of sequence packets; c) Duplicate packets; and d) Drop out periods (when a node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure that meets the challenges of a distributed ad hoc network in a difficult environment and presents the results of actual field experiments testing the principles and actual code developed.
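One common ingredient of such an architecture is tagging each reading with a per-source sequence number so the receiver can discard duplicates, reorder packets, and report gaps after dropouts. The sketch below is a generic illustration of that idea, not the infrastructure actually fielded in these experiments.

    # Generic receiver-side sketch for tolerating duplicate, out-of-order and
    # missing sensor packets using per-source sequence numbers. Illustrative only.
    class SensorStream:
        def __init__(self):
            self.next_seq = 0          # next sequence number expected
            self.pending = {}          # out-of-order packets held back
            self.delivered = []        # in-order data handed to the application

        def receive(self, seq, payload):
            if seq < self.next_seq or seq in self.pending:
                return                 # duplicate packet: ignore
            self.pending[seq] = payload
            while self.next_seq in self.pending:   # flush any in-order run
                self.delivered.append(self.pending.pop(self.next_seq))
                self.next_seq += 1

        def gaps(self):
            """Sequence numbers still missing below the highest one seen."""
            seen_max = max(self.pending, default=self.next_seq - 1)
            return [s for s in range(self.next_seq, seen_max + 1) if s not in self.pending]

    stream = SensorStream()
    for seq, data in [(0, "t=20C"), (2, "t=21C"), (2, "t=21C"), (1, "t=20.5C")]:
        stream.receive(seq, data)
    print(stream.delivered, stream.gaps())   # ['t=20C', 't=20.5C', 't=21C'] []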
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Pamala C.; Richman, Eric E.
2008-09-01
Feeling dim from energy code confusion? Read on to give your inspections a charge. The U.S. Department of Energy’s Building Energy Codes Program addresses hundreds of inquiries from the energy codes community every year. This article offers clarification on topics of confusion submitted to BECP Technical Support that are of interest to electrical inspectors, focusing on the residential and commercial energy code requirements based on the most recently published 2006 International Energy Conservation Code® and ANSI/ASHRAE/IESNA Standard 90.1-2004.
The efficacy of print and video in correcting cognitive misconceptions in science
NASA Astrophysics Data System (ADS)
Finney, Mary Jo
One hundred fifty-three fifth grade students found to have misconceptions about seasonal change were randomly assigned to either a video-print or print-video group. In Study One, each group read or viewed content about seasonal change and a free recall, multiple choice and application task were administered during the following week. Two weeks later, Study Two replicated the procedures with the groups receiving content in the alternate media. Hypotheses predicting video would be more effective than print in correcting misconceptions were rejected since there was either no significance on the measures or performance was higher after reading. Exposure to both media favored the video-print order. Low and high ability readers performed better after print treatment with no significant difference between media among average ability readers. More concepts than content vocabulary were present in written responses by both video and print groups. Post-hoc analysis revealed no gender differences, no significant difference in length of free recall between Study One and Study Two and significant differences between reading abilities on all measures.
The View Behind and Ahead: Implications of Certification *
Darling, Louise
1973-01-01
The Medical Library Association's certification plan, never of real significance in employment and promotion practices in health sciences librarianship, does not reflect the many changes which have occurred in swift progression since adoption of the code in 1949. Solutions to the problems which have accumulated since then are sought in a brief examination of trends in credentialing and certification in the health professions and in the library field, both general and special. Emphasis is given to the historical development of provisions in the MLA Code for the Training and Certification of Medical Librarians, the limited opportunity for practical implementation of most of the provisions, the importance of the code in stimulating the Association's educational programs, the impact of the Medical Library Assistance Act, Regional Medical Programs, and increases in demand for health information on manpower requirements for health science libraries, the specific dissatisfactions MLA members have expressed over certification, and the role of the Ad Hoc Committee to Develop a New Certification Code. PMID:4744343
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paumel, Kevin; Lhuillier, Christian
2015-07-01
Identifying subassemblies by ultrasound is a method that is being considered to prevent handling errors in sodium fast reactors. It is based on the reading of a code (aligned notches) engraved on the subassembly head by an emitting/receiving ultrasonic sensor. This reading is carried out in sodium with high temperature transducers. The resulting one-dimensional C-scan can be likened to a binary code expressing the subassembly type and number. The first test performed in water investigated two parameters: width and depth of the notches. The code remained legible for notches as thin as 1.6 mm wide. The impact of the depth seems minor in the range under investigation. (authors)
AbouHaidar, Mounir Georges; Venkataraman, Srividhya; Golshani, Ashkan; Liu, Bolin; Ahmad, Tauqeer
2014-01-01
The highly structured (64% GC) covalently closed circular (CCC) RNA (220 nt) of the virusoid associated with rice yellow mottle virus codes for a 16-kDa highly basic protein using novel modalities for coding, translation, and gene expression. This CCC RNA is the smallest among all known viroids and virusoids and the only one that codes proteins. Its sequence possesses an internal ribosome entry site and is directly translated through two (or three) completely overlapping ORFs (shifting to a new reading frame at the end of each round). The initiation and termination codons overlap UGAUGA (the initiation codon AUG is embedded within this combined initiation-termination sequence). Termination codons can be ignored to obtain larger read-through proteins. This circular RNA with no noncoding sequences is a unique natural supercompact “nanogenome.” PMID:25253891
A Scalable and Dynamic Testbed for Conducting Penetration-Test Training in a Laboratory Environment
2015-03-01
entry point through which to execute a payload to accomplish a higher-level goal: executing arbitrary code, escalating privileges, pivoting...Mobile Ad Hoc Network Emulator (EMANE) can emulate the entire network stack (physical- to application-layer protocols). 2. Methodology To build a...to host Windows, Linux, MacOS, Android, and other operating systems without much effort. E. A simple and automatic “restore” function: Many
Ziegler, Johannes C; Bertrand, Daisy; Lété, Bernard; Grainger, Jonathan
2014-04-01
The present study used a variant of masked priming to track the development of 2 marker effects of orthographic and phonological processing from Grade 1 through Grade 5 in a cross-sectional study. Pseudohomophone (PsH) priming served as a marker for phonological processing, whereas transposed-letter (TL) priming was a marker for coarse-grained orthographic processing. The results revealed a clear developmental picture. First, the PsH priming effect was significant and remained stable across development, suggesting that phonology not only plays an important role in early reading development but continues to exert a robust influence throughout reading development. This finding challenges the view that more advanced readers should rely less on phonological information than younger readers. Second, the TL priming effect increased monotonically with grade level and reading age, which suggests greater reliance on coarse-grained orthographic coding as children become better readers. Thus, TL priming effects seem to be a good marker effect for children's ability to use coarse-grained orthographic coding to speed up direct lexical access in alphabetic languages. The results were predicted by the dual-route model of orthographic processing, which suggests that direct orthographic access is achieved through coarse-grained orthographic coding that tolerates some degree of flexibility in letter order. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Bertke, S J; Meyers, A R; Wurzelbacher, S J; Bell, J; Lampl, M L; Robins, D
2012-12-01
Tracking and trending rates of injuries and illnesses classified as musculoskeletal disorders caused by ergonomic risk factors such as overexertion and repetitive motion (MSDs) and slips, trips, or falls (STFs) in different industry sectors is of high interest to many researchers. Unfortunately, identifying the cause of injuries and illnesses in large datasets such as workers' compensation systems often requires reading and coding the free form accident text narrative for potentially millions of records. To alleviate the need for manual coding, this paper describes and evaluates a computer auto-coding algorithm that demonstrated the ability to code millions of claims quickly and accurately by learning from a set of previously manually coded claims. The auto-coding program was able to code claims as a musculoskeletal disorder (MSD), STF, or other with approximately 90% accuracy. The program developed and discussed in this paper provides an accurate and efficient method for identifying the causation of workers' compensation claims as an STF or MSD in a large database based on the unstructured text narrative and resulting injury diagnoses. The program coded thousands of claims in minutes. The method described in this paper can be used by researchers and practitioners to relieve the manual burden of reading and identifying the causation of claims as an STF or MSD. Furthermore, the method can be easily generalized to code/classify other unstructured text narratives. Published by Elsevier Ltd.
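The general approach (learning to code claims from previously coded narratives) can be illustrated with a standard bag-of-words text classifier. The scikit-learn sketch below is only a generic stand-in for the authors' algorithm, and the narratives and labels in it are invented.

    # Illustrative text-classification sketch of the general approach (learn from
    # manually coded narratives, then auto-code new claims). NOT the authors'
    # exact algorithm; the training examples below are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    train_narratives = [
        "slipped on wet floor and fell injuring hip",
        "fell from ladder while descending stairs",
        "lifting heavy boxes strained lower back",
        "repetitive motion typing caused wrist pain",
        "cut finger on sharp blade",
    ]
    train_labels = ["STF", "STF", "MSD", "MSD", "OTHER"]

    classifier = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    classifier.fit(train_narratives, train_labels)

    print(classifier.predict(["worker strained shoulder lifting a heavy pallet"]))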
ERIC Educational Resources Information Center
Cook, Jimmie
1996-01-01
Claims that reading and writing are closely related at all grade levels. Points out that reading aloud; sharing quality children's literature; and incorporating activities such as recitation, singing, and poetry can facilitate the transition from oral to written language codes. Proves that teacher participation in such activities can encourage…
Exploring Temporal Progression of Events Using Eye Tracking.
Welke, Tinka; Raisig, Susanne; Hagendorf, Herbert; van der Meer, Elke
2016-07-01
This study investigates the representation of the temporal progression of events by means of the causal change in a patient. Subjects were asked to verify the relationship between adjectives denoting a source and resulting feature of a patient. The features were presented either chronologically or inversely to a primed event context given by a verb (to cut: long-short vs. short-long). Effects on response time and on eye movement data show that the relationship between features presented chronologically is verified more easily than that between features presented inversely. Post hoc, however, we found that the effects of temporal order occurred only when subjects read the features more than once. Then, the relationship between the features is matched with the causal change implied by the event context (contextual strategy). When subjects read the features only once, subjects respond to the relationship between the features without taking into account the event context. Copyright © 2015 Cognitive Science Society, Inc.
ERIC Educational Resources Information Center
Leavitt, Brynn C.
2010-01-01
What do we do when we read in another language? How do we make sense of the lexical and syntactic structures on the page? This study's development and use of the Miscue Coding for Metacognitive Strategies (MCMS), a foreign language assessment tool, offers language students and instructors a holistic approach to considering these questions. In this…
Reinhardt, Josephine A.; Wanjiru, Betty M.; Brant, Alicia T.; Saelao, Perot; Begun, David J.; Jones, Corbin D.
2013-01-01
How non-coding DNA gives rise to new protein-coding genes (de novo genes) is not well understood. Recent work has revealed the origins and functions of a few de novo genes, but common principles governing the evolution or biological roles of these genes are unknown. To better define these principles, we performed a parallel analysis of the evolution and function of six putatively protein-coding de novo genes described in Drosophila melanogaster. Reconstruction of the transcriptional history of de novo genes shows that two de novo genes emerged from novel long non-coding RNAs that arose at least 5 MY prior to evolution of an open reading frame. In contrast, four other de novo genes evolved a translated open reading frame and transcription within the same evolutionary interval suggesting that nascent open reading frames (proto-ORFs), while not required, can contribute to the emergence of a new de novo gene. However, none of the genes arose from proto-ORFs that existed long before expression evolved. Sequence and structural evolution of de novo genes was rapid compared to nearby genes and the structural complexity of de novo genes steadily increases over evolutionary time. Despite the fact that these genes are transcribed at a higher level in males than females, and are most strongly expressed in testes, RNAi experiments show that most of these genes are essential in both sexes during metamorphosis. This lethality suggests that protein coding de novo genes in Drosophila quickly become functionally important. PMID:24146629
Ohno, S
1984-01-01
Three outstanding properties uniquely qualify repeats of base oligomers as the primordial coding sequences of all polypeptide chains. First, when compared with randomly generated base sequences in general, they are more likely to have long open reading frames. Second, periodical polypeptide chains specified by such repeats are more likely to assume either alpha-helical or beta-sheet secondary structures than are polypeptide chains of random sequence. Third, provided that the number of bases in the oligomeric unit is not a multiple of 3, these internally repetitious coding sequences are impervious to randomly sustained base substitutions, deletions, and insertions. This is because the recurring periodicity of their polypeptide chains is given by three consecutive copies of the oligomeric unit translated in three different reading frames. Accordingly, when one reading frame is open, the other two are automatically open as well, all three being capable of coding for polypeptide chains of identical periodicity. Under this circumstance, a frame shift due to the deletion or insertion of a number of bases that is not a multiple of 3 fails to alter the down-stream amino acid sequence, and even a base change causing premature chain-termination can silence only one of the three potential coding units. Newly arisen coding sequences in modern organisms are oligomeric repeats, and most of the older genes retain various vestiges of their original internal repetitions. Some of the genes (e.g., oncogenes) have even inherited the property of being impervious to randomly sustained base changes.
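Ohno's periodicity argument can be checked directly: for a tandem repeat of an oligomer whose length is not a multiple of 3, every reading frame yields a periodic codon sequence with the same period. The sketch below demonstrates this at the codon level using an arbitrary 4-base unit.

    # Demonstrates the point at the codon level: for a repeated oligomer whose
    # length is not a multiple of 3 (here a 4-base unit), every reading frame
    # yields a periodic codon sequence, so all three frames share the same
    # periodicity. The unit "ACGT" is an arbitrary example.
    def codons(seq, frame):
        s = seq[frame:]
        return [s[i:i + 3] for i in range(0, len(s) - 2, 3)]

    unit = "ACGT"                       # 4 bases: not a multiple of 3
    repeat = unit * 12                  # tandem repeat of the oligomeric unit

    for frame in range(3):
        c = codons(repeat, frame)
        period = len(unit)              # lcm(3, 4) / 3 = 4 codons per period
        print(frame, c[:period], all(c[i] == c[i % period] for i in range(len(c))))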
Lipinska, B; Rao, A S; Bolten, B M; Balakrishnan, R; Goldberg, E B
1989-01-01
We sequenced bacteriophage T4 genes 2 and 3 and the putative C-terminal portion of gene 50. They were found to have appropriate open reading frames directed counterclockwise on the T4 map. Mutations in genes 2 and 64 were shown to be in the same open reading frame, which we now call gene 2. This gene codes for a protein of 27,068 daltons. The open reading frame corresponding to gene 3 codes for a protein of 20,634 daltons. Appropriate bands on polyacrylamide gels were identified at 30 and 20 kilodaltons, respectively. We found that the product of the cloned gene 2 can protect T4 DNA double-stranded ends from exonuclease V action. PMID:2644202
40 CFR Appendix III to Subpart S... - As-Received Inspection
Code of Federal Regulations, 2010 CFR
2010-07-01
...) General Compliance Provisions for Control of Air Pollution From New and In-Use Light-Duty Vehicles, Light... Reading 7. Build Date 8. MIL light on/off status 9. Readiness code status 10. Stored OBD codes 11. Any...
Verification of the predictive capabilities of the 4C code cryogenic circuit model
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi
2014-01-01
The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; an object-oriented a-causal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including also data coming from the supercritical He loop HELIOS at CEA Grenoble. However, all this analysis work was done each time after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To do that, a set of ad-hoc experimental scenarios have been designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C have then been performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.
Codes, Ciphers, and Cryptography--An Honors Colloquium
ERIC Educational Resources Information Center
Karls, Michael A.
2010-01-01
At the suggestion of a colleague, I read "The Code Book", [32], by Simon Singh to get a basic introduction to the RSA encryption scheme. Inspired by Singh's book, I designed a Ball State University Honors Colloquium in Mathematics for both majors and non-majors, with material coming from "The Code Book" and many other sources. This course became…
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
ERIC Educational Resources Information Center
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
The social disutility of software ownership.
Douglas, David M
2011-09-01
Software ownership allows the owner to restrict the distribution of software and to prevent others from reading the software's source code and building upon it. However, free software is released to users under software licenses that give them the right to read the source code, modify it, reuse it, and distribute the software to others. Proponents of free software such as Richard M. Stallman and Eben Moglen argue that the social disutility of software ownership is a sufficient justification for prohibiting it. This social disutility includes the social instability of disregarding laws and agreements covering software use and distribution, inequality of software access, and the inability to help others by sharing software with them. Here I consider these and other social disutility claims against withholding specific software rights from users, in particular, the rights to read the source code, duplicate, distribute, modify, imitate, and reuse portions of the software within new programs. I find that generally while withholding these rights from software users does cause some degree of social disutility, only the rights to duplicate, modify and imitate cannot legitimately be denied to users on this basis. The social disutility of withholding the rights to distribute the software, read its source code and reuse portions of it in new programs is insufficient to prohibit software owners from denying them to users. A compromise between the software owner and user can minimise the social disutility of withholding these particular rights from users. However, the social disutility caused by software patents is sufficient for rejecting such patents as they restrict the methods of reducing social disutility possible with other forms of software ownership.
Is the orthographic/phonological onset a single unit in reading aloud?
Mousikou, Petroula; Coltheart, Max; Saunders, Steven; Yen, Lisa
2010-02-01
Two main theories of visual word recognition have been developed regarding the way orthographic units in printed words map onto phonological units in spoken words. One theory suggests that a string of single letters or letter clusters corresponds to a string of phonemes (Coltheart, 1978; Venezky, 1970), while the other suggests that a string of single letters or letter clusters corresponds to coarser phonological units, for example, onsets and rimes (Treiman & Chafetz, 1987). These theoretical assumptions were critical for the development of coding schemes in prominent computational models of word recognition and reading aloud. In a reading-aloud study, we tested whether the human reading system represents the orthographic/phonological onset of printed words and nonwords as single units or as separate letters/phonemes. Our results, which favored a letter and not an onset-coding scheme, were successfully simulated by the dual-route cascaded (DRC) model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). A separate experiment was carried out to further adjudicate between 2 versions of the DRC model.
Ramirez, Lisa Marie S; He, Muhan; Mailloux, Shay; George, Justin; Wang, Jun
2016-06-01
Microparticles carrying quick response (QR) barcodes are fabricated by J. Wang and co-workers on page 3259, using a massive coding of dissociated elements (MiCODE) technology. Each microparticle can bear a special custom-designed QR code that enables encryption or tagging with unlimited multiplexity, and the QR code can be easily read by cellphone applications. The utility of MiCODE particles in multiplexed DNA detection and microtagging for anti-counterfeiting is explored. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Roch, Maja; Florit, Elena; Levorato, M Chiara
2012-01-01
The current study was designed to investigate the role played by verbal memory in the advantage shown by individuals with Down syndrome in reading over listening text comprehension (Roch & Levorato, 2009). Two different aspects of verbal memory were analyzed: processing load and coding modality. Participants were 20 individuals with Down syndrome, aged between 11 and 26 years who were matched for reading comprehension with a group of 20 typically developing children aged between 6;3 and 7;3 years. The two groups were presented with a listening comprehension test and four verbal memory tasks in which the degree of processing load and the coding modality were manipulated. The results of the study confirmed the advantage of reading over listening comprehension for individuals with Down syndrome. Furthermore, it emerged that different aspects of verbal memory were related respectively to reading and to listening comprehension: visual memory with low processing load was related to the former and oral memory with high processing load to the latter. Finally, it was demonstrated that verbal memory contributed to explain the advantage of reading over listening comprehension in Down syndrome. The results are discussed in light of their theoretical relevance and practical implications. Copyright © 2011 Elsevier Ltd. All rights reserved.
Analytical Modeling of Medium Access Control Protocols in Wireless Networks
2006-03-01
Rician-fading channels. However, no provision was made to consider a multihop ad hoc network and the interdependencies among the nodes. Gitman [54...published what is arguably the first paper that actually dealt with a multihop system. Gitman considered a two-hop centralized network consisting of a...of MIMO space-time coded wireless systems. IEEE Journal on Selected Areas in Communications, 21(3):281–302, April 2003. [54] I. Gitman. On the
Letter to the editor : Impartial review is key.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crabtree, G. W.; Materials Science Division
The News Feature, 'Misconduct in physics: Time to wise up?' [Nature 418, 120-121; 2002], raises important issues that the physical-science community must face. Argonne National Laboratory's code of ethics calls for a response very similar to that of Bell Labs, namely: 'The Laboratory director may appoint an ad-hoc scientific review committee to investigate internal or external charges of scientific misconduct, fraud, falsification of data, misinterpretation of data, or other activities involving scientific or technical matters.'
1981-01-01
Air Force, whereas the Army is still in the middle of its program. This means, with reference to the Army, we will either have a surplus or we will ...an ad hoc basis. Somebody had the bright idea, makes some progress, and then needs help in evaluating how that particular new material will stand up...countries? I won’t ask how many of you think that this affects your businesses. I will read a few of the key phrases from the German MOU. Even those of you
Jupyter and Galaxy: Easing entry barriers into complex data analyses for biomedical researchers.
Grüning, Björn A; Rasche, Eric; Rebolledo-Jaramillo, Boris; Eberhard, Carl; Houwaart, Torsten; Chilton, John; Coraor, Nate; Backofen, Rolf; Taylor, James; Nekrutenko, Anton
2017-05-01
What does it take to convert a heap of sequencing data into a publishable result? First, common tools are employed to reduce primary data (sequencing reads) to a form suitable for further analyses (i.e., the list of variable sites). The subsequent exploratory stage is much more ad hoc and requires the development of custom scripts and pipelines, making it problematic for biomedical researchers. Here, we describe a hybrid platform combining common analysis pathways with the ability to explore data interactively. It aims to fully encompass and simplify the "raw data-to-publication" pathway and make it reproducible.
Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
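The underlying idea of stub-based code-list building can be sketched generically: scan a code dictionary for clinician-suggested word-stubs and code-stubs and return candidates for review. The Python sketch below is only an illustration of that workflow, not the published pcdsearch/Rpcdsearch commands, and its dictionary entries are invented.

    # Generic illustration of stub-based code-list building (NOT the published
    # pcdsearch/Rpcdsearch commands). The dictionary entries are invented for the
    # example; real work would use the full Read code dictionary.
    code_dictionary = {
        "E11..": "Schizophrenic disorders",
        "E110.": "Simple schizophrenia",
        "Eu20.": "[X]Schizophrenia",
        "H33..": "Asthma",
    }

    def search_codes(dictionary, word_stubs=(), code_stubs=()):
        """Return candidate (code, term) pairs matching any word-stub or code-stub."""
        hits = {}
        for code, term in dictionary.items():
            if any(stub.lower() in term.lower() for stub in word_stubs) or \
               any(code.startswith(stub) for stub in code_stubs):
                hits[code] = term
        return hits

    candidates = search_codes(code_dictionary, word_stubs=["schizo"], code_stubs=["Eu2"])
    print(candidates)   # candidate list to be reviewed and pruned by clinicians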
Bertke, S. J.; Meyers, A. R.; Wurzelbacher, S. J.; Bell, J.; Lampl, M. L.; Robins, D.
2015-01-01
Introduction Tracking and trending rates of injuries and illnesses classified as musculoskeletal disorders caused by ergonomic risk factors such as overexertion and repetitive motion (MSDs) and slips, trips, or falls (STFs) in different industry sectors is of high interest to many researchers. Unfortunately, identifying the cause of injuries and illnesses in large datasets such as workers’ compensation systems often requires reading and coding the free form accident text narrative for potentially millions of records. Method To alleviate the need for manual coding, this paper describes and evaluates a computer auto-coding algorithm that demonstrated the ability to code millions of claims quickly and accurately by learning from a set of previously manually coded claims. Conclusions The auto-coding program was able to code claims as a musculoskeletal disorders, STF or other with approximately 90% accuracy. Impact on industry The program developed and discussed in this paper provides an accurate and efficient method for identifying the causation of workers’ compensation claims as a STF or MSD in a large database based on the unstructured text narrative and resulting injury diagnoses. The program coded thousands of claims in minutes. The method described in this paper can be used by researchers and practitioners to relieve the manual burden of reading and identifying the causation of claims as a STF or MSD. Furthermore, the method can be easily generalized to code/classify other unstructured text narratives. PMID:23206504
ERIC Educational Resources Information Center
Mahecha, Rocío; Urrego, Stella; Lozano, Erika
2011-01-01
In this article we report on an innovation project developed with a group of eleventh graders at a public school in Bogotá. Its aim was to encourage students to improve reading comprehension of texts in English. It was conducted taking into account students' needs, interests and level of English. To do it, we implemented two reading strategies:…
Heim, Stefan; Weidner, Ralph; von Overheidt, Ann-Christin; Tholen, Nicole; Grande, Marion; Amunts, Katrin
2014-03-01
Phonological and visual dysfunctions may result in reading deficits like those encountered in developmental dyslexia. Here, we use a novel approach to induce similar reading difficulties in normal readers in an event-related fMRI study, thus systematically investigating which brain regions relate to different pathways relating to orthographic-phonological (e.g. grapheme-to-phoneme conversion, GPC) vs. visual processing. Based upon a previous behavioural study (Tholen et al. 2011), the retrieval of phonemes from graphemes was manipulated by lowering the identifiability of letters in familiar vs. unfamiliar shapes. Visual word and letter processing was impeded by presenting the letters of a word in a moving, non-stationary manner. FMRI revealed that the visual condition activated cytoarchitectonically defined area hOC5 in the magnocellular pathway and area 7A in the right mesial parietal cortex. In contrast, the grapheme manipulation revealed different effects localised predominantly in bilateral inferior frontal gyrus (left cytoarchitectonic area 44; right area 45) and inferior parietal lobule (including areas PF/PFm), regions that have been demonstrated to show abnormal activation in dyslexic as compared to normal readers. This pattern of activation bears close resemblance to recent findings in dyslexic samples both behaviourally and with respect to the neurofunctional activation patterns. The novel paradigm may thus prove useful in future studies to understand reading problems related to distinct pathways, potentially providing a link also to the understanding of real reading impairments in dyslexia.
Visual and Auditory Memory: Relationships to Reading Achievement.
ERIC Educational Resources Information Center
Bruning, Roger H.; And Others
1978-01-01
Good and poor readers' visual and auditory memory were tested. No group differences existed for single mode presentation in recognition frequency or latency. With multimodal presentation, good readers had faster latencies. Dual coding and self-terminating memory search hypotheses were supported. Implications for the reading process and reading…
Spelling in Adults: The Combined Influences of Language Skills and Reading Experience
ERIC Educational Resources Information Center
Burt, Jennifer S.
2006-01-01
One hundred and twelve university students completed 7 tests assessing word-reading accuracy, print exposure, phonological sensitivity, phonological coding and knowledge of English morphology as predictors of spelling accuracy. Together the tests accounted for 71% of the variance in spelling, with phonological skills and morphological knowledge…
A Comparison of Schools: Teacher Knowledge of Explicit Code-Based Reading Instruction
ERIC Educational Resources Information Center
Cohen, Rebecca A.; Mather, Nancy; Schneider, Deborah A.; White, Jennifer M.
2017-01-01
One-hundred-fourteen kindergarten through third-grade teachers from seven different schools were surveyed using "The Survey of Preparedness and Knowledge of Language Structure Related to Teaching Reading to Struggling Students." The purpose was to compare their definitions and application knowledge of language structure, phonics, and…
Prosodic Encoding in Silent Reading.
ERIC Educational Resources Information Center
Wilkenfeld, Deborah
In silent reading, short-memory tasks, such as semantic and syntactic processing, require a stage of phonetic encoding between visual representation and the actual extraction of meaning, and this encoding includes prosodic as well as segmental features. To test for this suprasegmental coding, an experiment was conducted in which subjects were…
QRAC-the-Code: A Comprehension Monitoring Strategy for Middle School Social Studies Textbooks
ERIC Educational Resources Information Center
Berkeley, Sheri; Riccomini, Paul J.
2013-01-01
Requirements for reading and ascertaining information from text increase as students advance through the educational system, especially in content-rich classes; hence, monitoring comprehension is especially important. However, this is a particularly challenging skill for many students who struggle with reading comprehension, including students…
Laser direct marking applied to rasterizing miniature Data Matrix Code on aluminum alloy
NASA Astrophysics Data System (ADS)
Li, Xia-Shuang; He, Wei-Ping; Lei, Lei; Wang, Jian; Guo, Gai-Fang; Zhang, Teng-Yun; Yue, Ting
2016-03-01
Precise miniaturization of 2D Data Matrix (DM) Codes on aluminum alloy formed by raster mode laser direct part marking is demonstrated. The characteristic edge over-burn effects, which render vector mode laser direct part marking inadequate for producing precise and readable miniature codes, are minimized with raster mode laser marking. To obtain the control mechanism for the contrast and print growth of miniature DM codes in the raster laser marking process, a temperature field model of long-pulse laser interaction with the material is established. From the experimental results, laser average power and Q frequency have an important effect on the contrast and print growth of miniature DM codes, and the thresholds of laser average power and Q frequency for an identifiable miniature DM code are 3.6 W and 110 kHz, respectively, which matches the model well within normal operating conditions. In addition, an empirical model of the correlation between laser marking parameters and module size is also obtained, and the optimal processing parameter values for an identifiable miniature DM code at each given data size are given. It is also found that increasing the number of repeat scans effectively improves the surface finish of the bore and the appearance consistency of the modules, which benefits readability. The reading quality of miniature DM codes is greatly improved by ultrasonic cleaning in water, which avoids interference from color speckles surrounding the modules.
Michel, Christian J
2017-04-18
In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has on average the highest occurrence in reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend here this definition at the gene level. This new statistical approach considers all the genes, i.e., of large and small lengths, with the same weight for searching the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated to each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes.
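A minimal Python sketch of the frame-occurrence idea underlying the circular code X (counting each trinucleotide in the reading frame and in its two shifted frames of a coding sequence); this is an illustration only, not Michel's gene-level statistic.

    from collections import Counter

    def frame_counts(cds):
        """Count trinucleotides read in frames 0, +1 and +2 of a coding sequence."""
        return {f: Counter(cds[i:i + 3] for i in range(f, len(cds) - 2, 3))
                for f in (0, 1, 2)}

    cds = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"
    counts = frame_counts(cds)
    # A trinucleotide is a candidate for X if, across many genes, it occurs
    # preferentially in frame 0 compared with the two shifted frames.
    print({t: (counts[0][t], counts[1][t], counts[2][t]) for t in counts[0]})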
Technologies for network-centric C4ISR
NASA Astrophysics Data System (ADS)
Dunkelberger, Kirk A.
2003-07-01
Three technologies form the heart of any network-centric command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) system: distributed processing, reconfigurable networking, and distributed resource management. Distributed processing, enabled by automated federation, mobile code, intelligent process allocation, dynamic multiprocessing groups, checkpointing, and other capabilities creates a virtual peer-to-peer computing network across the force. Reconfigurable networking, consisting of content-based information exchange, dynamic ad-hoc routing, information operations (perception management) and other component technologies forms the interconnect fabric for fault-tolerant inter-processor and node communication. Distributed resource management, which provides the means for distributed cooperative sensor management, foe sensor utilization, opportunistic collection, symbiotic inductive/deductive reasoning and other applications provides the canonical algorithms for network-centric enterprises and warfare. This paper introduces these three core technologies and briefly discusses a sampling of their component technologies and their individual contributions to network-centric enterprises and warfare. Based on the implied requirements, two new algorithms are defined and characterized which provide critical building blocks for network centricity: distributed asynchronous auctioning and predictive dynamic source routing. The first provides a reliable, efficient, effective approach for near-optimal assignment problems; the algorithm has been demonstrated to be a viable implementation for ad-hoc command and control, object/sensor pairing, and weapon/target assignment. The second is founded on traditional dynamic source routing (from mobile ad-hoc networking), but leverages the results of ad-hoc command and control (from the contributed auctioning algorithm) into significant increases in connection reliability through forward prediction. Emphasis is placed on the advantages gained from the closed-loop interaction of the multiple technologies in the network-centric application environment.
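The paper only characterizes the auctioning algorithm at a high level, so the following is a hedged, single-process Python sketch of a basic auction-style assignment (in the spirit of classic auction algorithms for near-optimal assignment problems); it is not the distributed asynchronous implementation described in the paper.

    def auction_assign(value, eps=0.01):
        """Assign each task (row) to a resource (column) by iterative bidding.
        value[i][j] is the benefit of giving task i to resource j (square matrix)."""
        n = len(value)
        prices = [0.0] * n
        owner = [None] * n              # owner[j] = task currently holding resource j
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            net = [value[i][j] - prices[j] for j in range(n)]
            best = max(range(n), key=lambda j: net[j])
            second = max(net[j] for j in range(n) if j != best) if n > 1 else 0.0
            prices[best] += net[best] - second + eps    # raise the winning bid
            if owner[best] is not None:
                unassigned.append(owner[best])          # outbid task re-enters
            owner[best] = i
        return owner, prices

    # Example: 3 tasks (e.g. targets) bidding for 3 resources (e.g. sensors)
    owner, prices = auction_assign([[10, 2, 4], [3, 8, 6], [5, 7, 9]])
    print(owner)    # owner[j] is the task assigned to resource j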
NASA Astrophysics Data System (ADS)
Gong, Liang; Wu, Yu; Jian, Qijie; Yin, Chunxiao; Li, Taotao; Gupta, Vijai Kumar; Duan, Xuewu; Jiang, Yueming
2018-01-01
Vibrio qinghaiensis sp.-Q67 (Vqin-Q67) is a freshwater luminescent bacterium that continuously emits blue-green light (485 nm). The bacterium has been widely used for detecting toxic contaminants. Here, we report the complete genome sequence of Vqin-Q67, obtained using third-generation PacBio sequencing technology. Continuous long reads were obtained from three PacBio sequencing runs, and reads >500 bp with a quality value of >0.75 were merged into a single dataset. The resulting highly contiguous de novo assembly has no genome gaps and comprises two chromosomes with substantial genetic information, including protein-coding genes, non-coding RNAs, transposons, and gene islands. Our dataset can be useful as a comparative genome for evolution and speciation studies, as well as for the analysis of protein-coding gene families, the pathogenicity of different Vibrio species in fish, the evolution of non-coding RNAs and transposons, and the regulation of gene expression in relation to the bioluminescence of Vqin-Q67.
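A minimal Python sketch of the read-filtering step quoted above (keep reads longer than 500 bp with a read-level quality value above 0.75); the tab-delimited read-summary layout is an assumption for illustration, not the actual PacBio file format used.

    def filter_reads(summary_tsv, min_len=500, min_qv=0.75):
        """Keep read IDs whose length exceeds min_len and whose read-level
        quality value exceeds min_qv (assumed columns: read_id, length, quality)."""
        kept = []
        with open(summary_tsv) as f:
            for line in f:
                read_id, length, quality = line.rstrip("\n").split("\t")
                if int(length) > min_len and float(quality) > min_qv:
                    kept.append(read_id)
        return kept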
AirShow 1.0 CFD Software Users' Guide
NASA Technical Reports Server (NTRS)
Mohler, Stanley R., Jr.
2005-01-01
AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.
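As a rough illustration of the multi-block PLOT3D grid layout the abstract refers to, here is a hedged Python sketch that assumes a plain C-binary, single-precision, 3-D multi-block grid file without IBLANK data; real PLOT3D files vary (Fortran record markers, double precision, IBLANK arrays), so this is not a general reader and not AirShow's implementation.

    import numpy as np

    def read_plot3d_grid(path):
        """Read a multi-block 3-D PLOT3D grid under the assumptions above."""
        with open(path, "rb") as f:
            nblocks = int(np.fromfile(f, dtype=np.int32, count=1)[0])
            dims = np.fromfile(f, dtype=np.int32, count=3 * nblocks).reshape(nblocks, 3)
            blocks = []
            for ni, nj, nk in dims:
                npts = int(ni) * int(nj) * int(nk)
                xyz = np.fromfile(f, dtype=np.float32, count=3 * npts)
                blocks.append(xyz.reshape(3, nk, nj, ni))  # x, y, z stored in sequence
        return dims, blocks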
A user's manual for the Loaded Microstrip Antenna Code (LMAC)
NASA Technical Reports Server (NTRS)
Forrai, D. P.; Newman, E. H.
1988-01-01
The use of the Loaded Microstrip Antenna Code is described. The geometry of this antenna is shown and its dimensions are described in terms of the program outputs. The READ statements for the inputs are detailed and typical values are given where applicable. The inputs of four example problems are displayed with the corresponding output of the code given in the appendices.
NASA Technical Reports Server (NTRS)
Wade, Randall S.; Jones, Bailey
2009-01-01
A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").
S-MART, a software toolbox to aid RNA-Seq data analysis.
Zytnicki, Matthias; Quesneville, Hadi
2011-01-01
High-throughput sequencing is now routinely performed in many experiments. But the analysis of the millions of sequences generated is often beyond the expertise of wet labs that have no personnel specializing in bioinformatics. Whereas several tools are now available to map high-throughput sequencing data on a genome, few of these can extract biological knowledge from the mapped reads. We have developed a toolbox called S-MART, which handles mapped RNA-Seq data. S-MART is an intuitive and lightweight tool which performs many of the tasks usually required for the analysis of mapped RNA-Seq reads. S-MART does not require any computer science background and thus can be used by the whole biologist community through a graphical interface. S-MART can run on any personal computer, yielding results within an hour for most queries, even for gigabytes of data. S-MART may perform the entire analysis of the mapped reads, without any need for other ad hoc scripts. With this tool, biologists can easily perform most of the analyses on their computer for their RNA-Seq data, from the mapped data to the discovery of important loci.
S-MART, A Software Toolbox to Aid RNA-seq Data Analysis
Zytnicki, Matthias; Quesneville, Hadi
2011-01-01
High-throughput sequencing is now routinely performed in many experiments. But the analysis of the millions of sequences generated is often beyond the expertise of wet labs that have no personnel specializing in bioinformatics. Whereas several tools are now available to map high-throughput sequencing data on a genome, few of these can extract biological knowledge from the mapped reads. We have developed a toolbox called S-MART, which handles mapped RNA-Seq data. S-MART is an intuitive and lightweight tool which performs many of the tasks usually required for the analysis of mapped RNA-Seq reads. S-MART does not require any computer science background and thus can be used by the whole biologist community through a graphical interface. S-MART can run on any personal computer, yielding results within an hour for most queries, even for gigabytes of data. S-MART may perform the entire analysis of the mapped reads, without any need for other ad hoc scripts. With this tool, biologists can easily perform most of the analyses on their computer for their RNA-Seq data, from the mapped data to the discovery of important loci. PMID:21998740
DistMap: a toolkit for distributed short read mapping on a Hadoop cluster.
Pandey, Ram Vinay; Schlötterer, Christian
2013-01-01
With the rapid and steady increase of next generation sequencing data output, the mapping of short reads has become a major data analysis bottleneck. On a single computer, it can take several days to map the vast quantity of reads produced from a single Illumina HiSeq lane. In an attempt to ameliorate this bottleneck we present a new tool, DistMap - a modular, scalable and integrated workflow to map reads in the Hadoop distributed computing framework. DistMap is easy to use, currently supports nine different short read mapping tools and can be run on all Unix-based operating systems. It accepts reads in FASTQ format as input and provides mapped reads in a SAM/BAM format. DistMap supports both paired-end and single-end reads thereby allowing the mapping of read data produced by different sequencing platforms. DistMap is available from http://code.google.com/p/distmap/
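DistMap's internals are not described beyond the workflow above, so the following is only a hedged Python sketch of the underlying idea of splitting a FASTQ file into fixed-size chunks that independent mapper tasks could process in parallel; the chunk size and single-end layout are illustrative assumptions, not DistMap's actual Hadoop implementation.

    from itertools import islice

    def fastq_chunks(path, reads_per_chunk=1_000_000):
        """Yield lists of FASTQ lines (4 lines per read) sized for one mapper task."""
        with open(path) as f:
            while True:
                chunk = list(islice(f, 4 * reads_per_chunk))
                if not chunk:
                    break
                yield chunk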
DistMap: A Toolkit for Distributed Short Read Mapping on a Hadoop Cluster
Pandey, Ram Vinay; Schlötterer, Christian
2013-01-01
With the rapid and steady increase of next generation sequencing data output, the mapping of short reads has become a major data analysis bottleneck. On a single computer, it can take several days to map the vast quantity of reads produced from a single Illumina HiSeq lane. In an attempt to ameliorate this bottleneck we present a new tool, DistMap - a modular, scalable and integrated workflow to map reads in the Hadoop distributed computing framework. DistMap is easy to use, currently supports nine different short read mapping tools and can be run on all Unix-based operating systems. It accepts reads in FASTQ format as input and provides mapped reads in a SAM/BAM format. DistMap supports both paired-end and single-end reads thereby allowing the mapping of read data produced by different sequencing platforms. DistMap is available from http://code.google.com/p/distmap/ PMID:24009693
Auditory Processing in Noise: A Preschool Biomarker for Literacy.
White-Schwoch, Travis; Woodruff Carr, Kali; Thompson, Elaine C; Anderson, Samira; Nicol, Trent; Bradlow, Ann R; Zecker, Steven G; Kraus, Nina
2015-07-01
Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.
Preparing for in situ processing on upcoming leading-edge supercomputers
Kress, James; Churchill, Randy Michael; Klasky, Scott; ...
2016-10-01
High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to help a large-scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
Functional Fixedness in Creative Thinking Tasks Depends on Stimulus Modality.
Chrysikou, Evangelia G; Motyka, Katharine; Nigro, Cristina; Yang, Song-I; Thompson-Schill, Sharon L
2016-11-01
Pictorial examples during creative thinking tasks can lead participants to fixate on these examples and reproduce their elements even when yielding suboptimal creative products. Semantic memory research may illuminate the cognitive processes underlying this effect. Here, we examined whether pictures and words differentially influence access to semantic knowledge for object concepts depending on whether the task is close- or open-ended. Participants viewed either names or pictures of everyday objects, or a combination of the two, and generated common, secondary, or ad hoc uses for them. Stimulus modality effects were assessed quantitatively through reaction times and qualitatively through a novel coding system, which classifies creative output on a continuum from top-down-driven to bottom-up-driven responses. Both analyses revealed differences across tasks. Importantly, for ad hoc uses, participants exposed to pictures generated more top-down-driven responses than those exposed to object names. These findings have implications for accounts of functional fixedness in creative thinking, as well as theories of semantic memory for object concepts.
Functional Fixedness in Creative Thinking Tasks Depends on Stimulus Modality
Chrysikou, Evangelia G.; Motyka, Katharine; Nigro, Cristina; Yang, Song-I; Thompson-Schill, Sharon L.
2015-01-01
Pictorial examples during creative thinking tasks can lead participants to fixate on these examples and reproduce their elements even when yielding suboptimal creative products. Semantic memory research may illuminate the cognitive processes underlying this effect. Here, we examined whether pictures and words differentially influence access to semantic knowledge for object concepts depending on whether the task is close- or open-ended. Participants viewed either names or pictures of everyday objects, or a combination of the two, and generated common, secondary, or ad hoc uses for them. Stimulus modality effects were assessed quantitatively through reaction times and qualitatively through a novel coding system, which classifies creative output on a continuum from top-down-driven to bottom-up-driven responses. Both analyses revealed differences across tasks. Importantly, for ad hoc uses, participants exposed to pictures generated more top-down-driven responses than those exposed to object names. These findings have implications for accounts of functional fixedness in creative thinking, as well as theories of semantic memory for object concepts. PMID:28344724
Germ-line and somatic EPHA2 coding variants in lens aging and cataract.
Bennett, Thomas M; M'Hamdi, Oussama; Hejtmancik, J Fielding; Shiels, Alan
2017-01-01
Rare germ-line mutations in the coding regions of the human EPHA2 gene (EPHA2) have been associated with inherited forms of pediatric cataract, whereas, frequent, non-coding, single nucleotide variants (SNVs) have been associated with age-related cataract. Here we sought to determine if germ-line EPHA2 coding SNVs were associated with age-related cataract in a case-control DNA panel (> 50 years) and if somatic EPHA2 coding SNVs were associated with lens aging and/or cataract in a post-mortem lens DNA panel (> 48 years). Micro-fluidic PCR amplification followed by targeted amplicon (exon) next-generation (deep) sequencing of EPHA2 (17-exons) afforded high read-depth coverage (1000x) for > 82% of reads in the cataract case-control panel (161 cases, 64 controls) and > 70% of reads in the post-mortem lens panel (35 clear lens pairs, 22 cataract lens pairs). Novel and reference (known) missense SNVs in EPHA2 that were predicted in silico to be functionally damaging were found in both cases and controls from the age-related cataract panel at variant allele frequencies (VAFs) consistent with germ-line transmission (VAF > 20%). Similarly, both novel and reference missense SNVs in EPHA2 were found in the post-mortem lens panel at VAFs consistent with a somatic origin (VAF > 3%). The majority of SNVs found in the cataract case-control panel and post-mortem lens panel were transitions and many occurred at di-pyrimidine sites that are susceptible to ultraviolet (UV) radiation induced mutation. These data suggest that novel germ-line (blood) and somatic (lens) coding SNVs in EPHA2 that are predicted to be functionally deleterious occur in adults over 50 years of age. However, both types of EPHA2 coding variants were present at comparable levels in individuals with or without age-related cataract making simple genotype-phenotype correlations inconclusive.
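A minimal Python sketch applying the variant-allele-frequency cutoffs quoted in the abstract (germ-line if VAF > 20%, somatic if VAF > 3%); the function and read counts are illustrative, not the authors' variant-calling pipeline.

    def classify_variant(alt_reads, total_reads, germline_cutoff=0.20, somatic_cutoff=0.03):
        """Classify an SNV call from deep-sequencing read counts by VAF."""
        vaf = alt_reads / total_reads
        if vaf > germline_cutoff:
            return "germ-line candidate", vaf
        if vaf > somatic_cutoff:
            return "somatic candidate", vaf
        return "below threshold", vaf

    print(classify_variant(alt_reads=310, total_reads=1000))  # ('germ-line candidate', 0.31)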
Germ-line and somatic EPHA2 coding variants in lens aging and cataract
Bennett, Thomas M.; M’Hamdi, Oussama; Hejtmancik, J. Fielding
2017-01-01
Rare germ-line mutations in the coding regions of the human EPHA2 gene (EPHA2) have been associated with inherited forms of pediatric cataract, whereas, frequent, non-coding, single nucleotide variants (SNVs) have been associated with age-related cataract. Here we sought to determine if germ-line EPHA2 coding SNVs were associated with age-related cataract in a case-control DNA panel (> 50 years) and if somatic EPHA2 coding SNVs were associated with lens aging and/or cataract in a post-mortem lens DNA panel (> 48 years). Micro-fluidic PCR amplification followed by targeted amplicon (exon) next-generation (deep) sequencing of EPHA2 (17-exons) afforded high read-depth coverage (1000x) for > 82% of reads in the cataract case-control panel (161 cases, 64 controls) and > 70% of reads in the post-mortem lens panel (35 clear lens pairs, 22 cataract lens pairs). Novel and reference (known) missense SNVs in EPHA2 that were predicted in silico to be functionally damaging were found in both cases and controls from the age-related cataract panel at variant allele frequencies (VAFs) consistent with germ-line transmission (VAF > 20%). Similarly, both novel and reference missense SNVs in EPHA2 were found in the post-mortem lens panel at VAFs consistent with a somatic origin (VAF > 3%). The majority of SNVs found in the cataract case-control panel and post-mortem lens panel were transitions and many occurred at di-pyrimidine sites that are susceptible to ultraviolet (UV) radiation induced mutation. These data suggest that novel germ-line (blood) and somatic (lens) coding SNVs in EPHA2 that are predicted to be functionally deleterious occur in adults over 50 years of age. However, both types of EPHA2 coding variants were present at comparable levels in individuals with or without age-related cataract making simple genotype-phenotype correlations inconclusive. PMID:29267365
Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.
Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M
2015-01-01
The name Alview is a contraction of the term Alignment Viewer. Alview is a compiled to native architecture software tool for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native, GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
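Alview itself is written in C, but as a hedged illustration of the kind of per-read alignment data it visualizes, here is a short Python sketch using pysam to iterate reads in a region of a coordinate-sorted, indexed BAM file; the file name and region are assumptions.

    import pysam

    bam = pysam.AlignmentFile("sample.bam", "rb")       # indexed, coordinate-sorted BAM
    for read in bam.fetch("chr1", 100000, 100200):      # reads overlapping the region
        if read.is_unmapped:
            continue
        print(read.query_name, read.reference_start, read.cigarstring)
    bam.close()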
NASA Technical Reports Server (NTRS)
1990-01-01
Lunar base projects, including a reconfigurable lunar cargo launcher, a thermal and micrometeorite protection system, a versatile lifting machine with robotic capabilities, a cargo transport system, the design of a road construction system for a lunar base, and the design of a device for removing lunar dust from material surfaces, are discussed. The emphasis on the Gulf of Mexico project was on the development of a computer simulation model for predicting vessel station keeping requirements. An existing code, used in predicting station keeping requirements for oil drilling platforms operating in North Shore (Alaska) waters was used as a basis for the computer simulation. Modifications were made to the existing code. The input into the model consists of satellite altimeter readings and water velocity readings from buoys stationed in the Gulf of Mexico. The satellite data consists of altimeter readings (wave height) taken during the spring of 1989. The simulation model predicts water velocity and direction, and wind velocity.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, all-direction ultra-high-speed reading, small printing size, and highly efficient representation of Chinese characters. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by adapting Sauvola's adaptive thresholding method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
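The paper adapts Sauvola-style local thresholding; the following is a hedged Python sketch of standard Sauvola binarization (T = m * (1 + k * (s / R - 1)) with local mean m and standard deviation s), not the paper's modified algorithm, and the window size and k, R values are conventional defaults rather than the authors' settings.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
        """Binarize a grayscale image with Sauvola's local threshold."""
        gray = gray.astype(np.float64)
        mean = uniform_filter(gray, window)
        mean_sq = uniform_filter(gray * gray, window)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        threshold = mean * (1.0 + k * (std / R - 1.0))
        return (gray > threshold).astype(np.uint8) * 255    # white background, dark modules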
Electroencephalography: Subdural Multi-Electrode Brain Chip.
1995-12-01
showing a blind subject reading Braille letters that had been inserted into his visual cortex by stimulating appropriate sets of electrodes. The ... subject in Dobelle's experiment had been blind for 10 years and was able to read Braille at 30 letters a minute using a 64-electrode array ... Evans, "'Braille' Reading by a Blind Volunteer by Visual Cortex Stimulation," Nature, 259: 111-112 (January 1976). A.K. Engel, et al., "Temporal Coding
NASA Astrophysics Data System (ADS)
Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi
This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC) that does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcasted in the next sequence as a piggy-back for its native packet. To prevent increase of overhead in each packet due to piggy-back packet transmission, network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signal is included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multi point relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
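A minimal Python sketch of the XOR piggy-back idea described above: a node XORs the packets it detected correctly in one sequence and broadcasts the coded packet alongside its native packet, so a receiver that missed exactly one native packet can rebuild it. Packet framing, padding, and the coding-vector exchange are omitted; equal-length payloads are assumed.

    def xor_encode(packets):
        """XOR equal-length payloads detected correctly in this sequence."""
        coded = bytearray(len(packets[0]))
        for p in packets:
            for i, b in enumerate(p):
                coded[i] ^= b
        return bytes(coded)

    def xor_recover(coded, known_packets):
        """Recover the single missing native packet when all others are known."""
        missing = bytearray(coded)
        for p in known_packets:
            for i, b in enumerate(p):
                missing[i] ^= b
        return bytes(missing)

    a, b, c = b"node-A-frame", b"node-B-frame", b"node-C-frame"
    coded = xor_encode([a, b, c])
    assert xor_recover(coded, [a, c]) == b    # B's lost frame rebuilt from the piggy-back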
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-29
... read ``\\a\\0.05''. On the same page, in the same table, in the same column, in the fifth row, ``3.0%'' should read ``3.01''. [FR Doc. C1-2011-6566 Filed 3-28-11; 8:45 am] BILLING CODE 1505-01-D ...
Implementing Intensive Vocabulary Instruction for Students at Risk for Reading Disability
ERIC Educational Resources Information Center
Pullen, Paige C.; Tuckwiller, Elizabeth D.; Ashworth, Kristen; Lovelace, Shelly P.; Cash, Deanna
2011-01-01
Concerns regarding literacy levels in the United States are long standing. Debates have existed for decades regarding the most effective ways to teach reading, especially the polarizing dilemma of how much to focus on decoding versus code-emphasis and whole language instruction. Fortunately, as a result of concentrated research efforts and…
ERIC Educational Resources Information Center
Jozwik, Sara L.; Douglas, Karen H.
2017-01-01
This study integrated technology tools into a reading comprehension intervention that used explicit instruction to teach strategies (i.e., asking questions, making connections, and coding the text to monitor for meaning) to mixed-ability small groups, which included four English Learners with learning disabilities in a fourth-grade general…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-12
... environmental impact statement or environmental assessment need be prepared for these amendments. If the... (ADAMS) Public Electronic Reading Room on the internet at the NRC Web site, http://www.nrc.gov/reading-rm... Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI as the source of...
Reading Comprehension Is Embodied: Theoretical and Practical Considerations
ERIC Educational Resources Information Center
Sadoski, Mark
2018-01-01
In this review, I advance the embodied cognition movement in cognitive psychology as both a challenge and an invitation for the study of reading comprehension. Embodied cognition challenges theories which assume that mental operations are based in a common, abstract, amodal code of propositions and schemata. Based on growing research in behavioral…
2005-03-01
codes speed up consumer shopping, package shipping, and inventory tracking. RFID offers many advantages over bar codes, as the table below shows...sunlight” (Accenture, 2001, p. 4). Finally, one of the most significant advantages of RFID is the advent of anti-collision. Anti-collision allows an...RFID reader to read and/or write to multiple tags at one time, which is not possible for bar codes. Despite the many advantages RFID over bar codes
A Source Anonymity-Based Lightweight Secure AODV Protocol for Fog-Based MANET
Fang, Weidong; Zhang, Wuxiong; Xiao, Jinchao; Yang, Yang; Chen, Wei
2017-01-01
Fog-based MANET (Mobile Ad hoc Network) is a novel paradigm of a mobile ad hoc network with the advantages of both mobility and fog computing. Meanwhile, as a traditional routing protocol, the ad hoc on-demand distance vector (AODV) routing protocol has been applied widely in fog-based MANET. Currently, improving transmission performance and enhancing security are the two major aspects of AODV research. However, joint consideration of energy efficiency and security has seldom been addressed. In this paper, we propose a source anonymity-based lightweight secure AODV (SAL-SAODV) routing protocol to meet the above requirements. In the SAL-SAODV protocol, source-anonymous and secure transmitting schemes are proposed and applied. The scheme involves the following three parts: the source anonymity algorithm is employed so that the source node cannot be tracked and located; the improved secure scheme based on the CRC-4 polynomial is applied to substitute for the RSA digital signature of SAODV and guarantee data integrity, in addition to reducing computation and energy consumption; the random delayed transmitting scheme (RDTM) is implemented to separate the check code from the transmitted data and achieve tamper-proof results. The simulation results show that the proposed SAL-SAODV achieves a trade-off among transmission performance, energy efficiency, and security, and performs better than AODV and SAODV. PMID:28629142
A Source Anonymity-Based Lightweight Secure AODV Protocol for Fog-Based MANET.
Fang, Weidong; Zhang, Wuxiong; Xiao, Jinchao; Yang, Yang; Chen, Wei
2017-06-17
Fog-based MANET (Mobile Ad hoc Network) is a novel paradigm of a mobile ad hoc network with the advantages of both mobility and fog computing. Meanwhile, as a traditional routing protocol, the ad hoc on-demand distance vector (AODV) routing protocol has been applied widely in fog-based MANET. Currently, improving transmission performance and enhancing security are the two major aspects of AODV research. However, joint consideration of energy efficiency and security has seldom been addressed. In this paper, we propose a source anonymity-based lightweight secure AODV (SAL-SAODV) routing protocol to meet the above requirements. In the SAL-SAODV protocol, source-anonymous and secure transmitting schemes are proposed and applied. The scheme involves the following three parts: the source anonymity algorithm is employed so that the source node cannot be tracked and located; the improved secure scheme based on the CRC-4 polynomial is applied to substitute for the RSA digital signature of SAODV and guarantee data integrity, in addition to reducing computation and energy consumption; the random delayed transmitting scheme (RDTM) is implemented to separate the check code from the transmitted data and achieve tamper-proof results. The simulation results show that the proposed SAL-SAODV achieves a trade-off among transmission performance, energy efficiency, and security, and performs better than AODV and SAODV.
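The abstract above states only that a CRC-4 polynomial replaces the RSA signature, so the following is a hedged Python sketch of a plain bitwise CRC-4 check; the polynomial x^4 + x + 1 (0x3) and the zero initial value are assumptions for illustration, not details taken from the SAL-SAODV specification.

    def crc4(data, poly=0x3, init=0x0):
        """Bitwise CRC-4 over a byte string (poly 0x3 encodes x^4 + x + 1)."""
        crc = init
        for byte in data:
            for bit in range(7, -1, -1):
                top = (crc >> 3) & 1
                crc = (crc << 1) & 0xF
                if top ^ ((byte >> bit) & 1):
                    crc ^= poly
        return crc

    payload = b"route-request-payload"
    check = crc4(payload)
    assert crc4(payload) == check    # receiver recomputes the check code and compares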
NASA Technical Reports Server (NTRS)
Johnson, Sherylene (Compiler); Bertelrud, Arild (Compiler); Anders, J. B. (Technical Monitor)
2002-01-01
This report is part of a series of reports describing a flow physics high-lift experiment conducted in NASA Langley Research Center's Low-Turbulence Pressure Tunnel (LTPT) in 1996. The anemometry system used in the experiment was originally designed for and used in flight tests with NASA's Boeing 737 airplane. Information that may be useful in the evaluation or use of the experimental data has been compiled. The report also contains details regarding record structure, how to read the embedded time code, as well as the output file formats used in the code reading the binary data.
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory.
Boyle, R; Coltheart, V
1996-05-01
The effects of irrelevant sounds on reading comprehension and short-term memory were studied in two experiments. In Experiment 1, adults judged the acceptability of written sentences during irrelevant speech, accompanied and unaccompanied singing, instrumental music, and in silence. Sentences varied in syntactic complexity: Simple sentences contained a right-branching relative clause (The applause pleased the woman that gave the speech) and syntactically complex sentences included a centre-embedded relative clause (The hay that the farmer stored fed the hungry animals). Unacceptable sentences either sounded acceptable (The dog chased the cat that eight up all his food) or did not (The man praised the child that sight up his spinach). Decision accuracy was impaired by syntactic complexity but not by irrelevant sounds. Phonological coding was indicated by increased errors on unacceptable sentences that sounded correct. These error rates were unaffected by irrelevant sounds. Experiment 2 examined effects of irrelevant sounds on ordered recall of phonologically similar and dissimilar word lists. Phonological similarity impaired recall. Irrelevant speech reduced recall but did not interact with phonological similarity. The results of these experiments question assumptions about the relationship between speech input and phonological coding in reading and the short-term store.
Olier, Ivan; Springate, David A.; Ashcroft, Darren M.; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
Background The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness as an example. Methods We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. Results We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. Conclusion We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists. PMID:26918439
Cross-Cultural Communication: Contrasting Perspectives, Conflicting Sensibilities.
ERIC Educational Resources Information Center
Kochman, Thomas
People fail to communicate because they fail to read accurately the cultural signs that each person is sending. This consistently produces bewilderment, and often feelings of anger, frustration, and pain. Communication becomes virtually impossible when people not only operate from different cultural codes, but are unaware that different codes are…
Rep. Smith, Lamar [R-TX-21
2012-07-09
Senate - 09/12/2012 Received in the Senate and Read twice and referred to the Committee on the Judiciary.
Predictors of Quality Verbal Engagement in Third-Grade Literature Discussions
ERIC Educational Resources Information Center
Young, Chase
2014-01-01
This study investigates how reading ability and personality traits predict the quality of verbal discussions in peer-led literature circles. Third grade literature discussions were recorded, transcribed, and coded. The coded statements and questions were quantified into a quality of engagement score. Through multiple linear regression, the…
76 FR 57982 - Building Energy Codes Cost Analysis
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-19
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2011-BT-BC-0046] Building Energy Codes Cost Analysis Correction In notice document 2011-23236 beginning on page... heading ``Table 1. Cash flow components'' should read ``Table 7. Cash flow components''. [FR Doc. C1-2011...
ERIC Educational Resources Information Center
De Nigris, Rosemarie Previti
2017-01-01
The hypothesis of the study was explicit gradual release of responsibility comprehension instruction (GRR) (Pearson & Gallagher, 1983; Fisher & Frey, 2008) with the researcher-created Story Grammar Code (SGC) strategy would significantly increase third graders' comprehension of narrative fiction and nonfiction text. SGC comprehension…
Phonological Coding in Good and Poor Readers.
ERIC Educational Resources Information Center
Briggs, Pamela; Underwood, Geoffrey
1982-01-01
A set of four experiments investigates the relationship between phonological coding and reading ability, using a picture-word interference task and a decoding task. Results with regard to both adults and children suggest that while poor readers possess weak decoding skills, good and poor readers show equivalent evidence of direct semantic and…
Bioinformatic analysis suggests that the Orbivirus VP6 cistron encodes an overlapping gene
Firth, Andrew E
2008-01-01
Background The genus Orbivirus includes several species that infect livestock – including Bluetongue virus (BTV) and African horse sickness virus (AHSV). These viruses have linear dsRNA genomes divided into ten segments, all of which have previously been assumed to be monocistronic. Results Bioinformatic evidence is presented for a short overlapping coding sequence (CDS) in the Orbivirus genome segment 9, overlapping the VP6 cistron in the +1 reading frame. In BTV, a 77–79 codon AUG-initiated open reading frame (hereafter ORFX) is present in all 48 segment 9 sequences analysed. The pattern of base variations across the 48-sequence alignment indicates that ORFX is subject to functional constraints at the amino acid level (even when the constraints due to coding in the overlapping VP6 reading frame are taken into account; MLOGD software). In fact the translated ORFX shows greater amino acid conservation than the overlapping region of VP6. The ORFX AUG codon has a strong Kozak context in all 48 sequences. Each has only one or two upstream AUG codons, always in the VP6 reading frame, and (with a single exception) always with weak or medium Kozak context. Thus, in BTV, ORFX may be translated via leaky scanning. A long (83–169 codon) ORF is present in a corresponding location and reading frame in all other Orbivirus species analysed except Saint Croix River virus (SCRV; the most divergent). Again, the pattern of base variations across sequence alignments indicates multiple coding in the VP6 and ORFX reading frames. Conclusion At ~9.5 kDa, the putative ORFX product in BTV is too small to appear on most published protein gels. Nonetheless, a review of past literature reveals a number of possible detections. We hope that presentation of this bioinformatic analysis will stimulate an attempt to experimentally verify the expression and functional role of ORFX, and hence lead to a greater understanding of the molecular biology of these important pathogens. PMID:18489030
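A minimal Python sketch of the basic scan implied above: finding AUG-initiated open reading frames in the +1 frame of an annotated CDS (DNA alphabet). The minimum length is arbitrary, and the MLOGD conservation analysis and Kozak-context scoring used in the paper are not reproduced here.

    def orfs_in_plus1_frame(cds, min_codons=50):
        """Report (start, end) of AUG-initiated ORFs in the +1 frame of a CDS."""
        stops = {"TAA", "TAG", "TGA"}
        found = []
        for start in range(1, len(cds) - 2, 3):          # +1 frame codon starts
            if cds[start:start + 3] != "ATG":
                continue
            for end in range(start, len(cds) - 2, 3):
                if cds[end:end + 3] in stops:
                    if (end - start) // 3 >= min_codons:
                        found.append((start, end + 3))
                    break
        return found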
ERIC Educational Resources Information Center
Haugh, Erin Kathleen
2017-01-01
The purpose of this study was to examine the role orthographic coding might play in distinguishing between membership in groups of language-based disability types. The sample consisted of 36 second and third-grade subjects who were administered the PAL-II Receptive Coding and Word Choice Accuracy subtest as a measure of orthographic coding…
A framework for streamlining research workflow in neuroscience and psychology
Kubilius, Jonas
2014-01-01
Successful accumulation of knowledge is critically dependent on the ability to verify and replicate every part of scientific conduct. However, such principles are difficult to enact when researchers continue to rely on ad-hoc workflows and poorly maintained code bases. In this paper I examine the needs of the neuroscience and psychology community, and introduce psychopy_ext, a unifying framework that seamlessly integrates popular experiment-building, analysis and manuscript-preparation tools by choosing reasonable defaults and implementing relatively rigid patterns of workflow. This structure allows for automation of multiple tasks, such as generating user interfaces, unit testing, control analyses of stimuli, single-command access to descriptive statistics, and publication-quality plotting. Taken together, psychopy_ext opens an exciting possibility for faster, more robust code development and collaboration for researchers. PMID:24478691
Small wins big: analytic pinyin skills promote Chinese word reading.
Lin, Dan; McBride-Chang, Catherine; Shu, Hua; Zhang, Yuping; Li, Hong; Zhang, Juan; Aram, Dorit; Levin, Iris
2010-08-01
The present study examined invented spelling of pinyin (a phonological coding system for teaching and learning Chinese words) in relation to subsequent Chinese reading development. Among 296 Chinese kindergartners in Beijing, independent invented pinyin spelling was found to be uniquely predictive of Chinese word reading 12 months later, even with Time 1 syllable deletion, phoneme deletion, and letter knowledge, in addition to the autoregressive effects of Time 1 Chinese word reading, statistically controlled. These results underscore the importance of children's early pinyin representations for Chinese reading acquisition, both theoretically and practically. The findings further support the idea of a universal phonological principle and indicate that pinyin is potentially an ideal measure of phonological awareness in Chinese.
ISRNA: an integrative online toolkit for short reads from high-throughput sequencing data.
Luo, Guan-Zheng; Yang, Wei; Ma, Ying-Ke; Wang, Xiu-Jie
2014-02-01
Integrative Short Reads NAvigator (ISRNA) is an online toolkit for analyzing high-throughput small RNA sequencing data. Besides the high-speed genome mapping function, ISRNA provides statistics for genomic location, length distribution and nucleotide composition bias analysis of sequence reads. Number of reads mapped to known microRNAs and other classes of short non-coding RNAs, coverage of short reads on genes, expression abundance of sequence reads as well as some other analysis functions are also supported. The versatile search functions enable users to select sequence reads according to their sub-sequences, expression abundance, genomic location, relationship to genes, etc. A specialized genome browser is integrated to visualize the genomic distribution of short reads. ISRNA also supports management and comparison among multiple datasets. ISRNA is implemented in Java/C++/Perl/MySQL and can be freely accessed at http://omicslab.genetics.ac.cn/ISRNA/.
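ISRNA itself is a web toolkit implemented in Java/C++/Perl/MySQL; as a hedged illustration of two of the summary statistics it reports, here is a short Python sketch that computes the read-length distribution and 5'-nucleotide composition from a FASTA file of short reads (single-line sequences assumed).

    from collections import Counter

    def read_stats(fasta_path):
        """Length distribution and 5'-nucleotide composition of short reads."""
        lengths, first_nt = Counter(), Counter()
        with open(fasta_path) as f:
            for line in f:
                if line.startswith(">") or not line.strip():
                    continue
                seq = line.strip().upper()
                lengths[len(seq)] += 1
                first_nt[seq[0]] += 1
        return lengths, first_nt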
Construction safety program for the National Ignition Facility, July 30, 1999 (NIF-0001374-OC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benjamin, D W
1999-07-30
These rules apply to all LLNL employees, non-LLNL employees (including contract labor, supplemental labor, vendors, personnel matrixed/assigned from other National Laboratories, participating guests, visitors and students) and contractors/subcontractors. The General Rules-Code of Safe Practices shall be used by management to promote accident prevention through indoctrination, safety and health training and on-the-job application. As a condition for contract award, all contractors and subcontractors and their employees must certify on Form S and H A-1 that they have read and understand, or have been briefed and understand, the National Ignition Facility OCIP Project General Rules-Code of Safe Practices. (An interpreter must brief those employees who do not speak or read English fluently.) In addition, all contractors and subcontractors shall adopt a written General Rules-Code of Safe Practices that relates to their operations. The General Rules-Code of Safe Practices must be posted at a conspicuous location at the job site office or be provided to each supervisory employee who shall have it readily available. Copies of the General Rules-Code of Safe Practices can also be included in employee safety pamphlets.
Lonigan, Christopher J.; Purpura, David J.; Wilson, Shauna B.; Walker, Patricia M.; Clancy-Menchetti, Jeanine
2013-01-01
Many preschool children are at risk for reading problems because of inadequate emergent literacy skills. Evidence supports the effectiveness of interventions to promote these skills, but questions remain about which intervention components work and whether combining intervention components will result in larger gains. In this study, 324 preschoolers (mean age = 54.32 months, SD = 5.88) from low-income backgrounds (46% girls and 54% boys; 82% African American, 14% White, and 4% other) were randomized to combinations of meaning-focused (dialogic reading or shared reading) and code-focused (phonological awareness, letter knowledge, or both) interventions or a control group. Interventions had statistically significant positive impacts only on measures of their respective skill domains. Combinations of interventions did not enhance outcomes across domains, indicating instructional needs in all areas of weakness for young children at risk for later reading difficulties. Less time for each intervention in the combined phonological awareness and letter knowledge intervention conditions, however, did not result in reduced effects relative to nearly twice as much time for each intervention when children received either only the phonological awareness intervention or only the letter knowledge intervention. This finding suggests that a relatively compact code-focused intervention can address the needs of children with weaknesses in both domains. PMID:23073367
Michel, Christian J.
2017-01-01
In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has on average the highest occurrence in reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend here this definition at the gene level. This new statistical approach considers all the genes, i.e., of large and small lengths, with the same weight for searching the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated to each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes. PMID:28420220
Enhancing L2 Vocabulary Acquisition through Implicit Reading Support Cues in E-books
ERIC Educational Resources Information Center
Liu, Yeu-Ting; Leveridge, Aubrey Neil
2017-01-01
Various explicit reading support cues, such as gloss, QR codes and hypertext annotation, have been embedded in e-books designed specifically for fostering various aspects of language development. However, explicit visual cues are not always reliably perceived as salient or effective by language learners. The current study explored the efficacy of…
Six Sensational Dots: Braille Literacy for Sighted Classmates
ERIC Educational Resources Information Center
Swenson, Anna M.; Cozart, Nancy
2010-01-01
From the moment sighted children see their first dot, teachers find that they are fascinated by the braille code. If they are fortunate enough to have a classmate who reads braille, they have daily opportunities to observe braille used for a variety of purposes, from reading chapter books to solving problems with tactile graphics. Teachers of…
A Case Study of Reading Instruction in a Philippine Classroom
ERIC Educational Resources Information Center
Protacio, Maria Selena; Sarroub, Loukia K.
2013-01-01
In this article, we describe the reading practices in a public and high-achieving 6th grade English classroom in the Philippines. By utilizing a four resources model, we discuss the different roles that students assume in this classroom. Students in this class are mainly code breakers and text users and have limited opportunities to assume the…
ERIC Educational Resources Information Center
Kim, Young-Suk; Al Otaiba, Stephanie; Puranik, Cynthia; Folsom, Jessica Sidler; Gruelich, Luana
2014-01-01
In the present study we examined the relation between alphabet knowledge fluency (letter names and sounds) and letter writing automaticity, and unique relations of letter writing automaticity and semantic knowledge (i.e., vocabulary) to word reading and spelling over and above code-related skills such as phonological awareness and alphabet…
Phonological Working Memory in German Children with Poor Reading and Spelling Abilities
ERIC Educational Resources Information Center
Steinbrink, Claudia; Klatte, Maria
2008-01-01
Deficits in verbal short-term memory have been identified as one factor underlying reading and spelling disorders. However, the nature of this deficit is still unclear. It has been proposed that poor readers make less use of phonological coding, especially if the task can be solved through visual strategies. In the framework of Baddeley's…
Semiology and a Semiological Reading of Power Myths in Education
ERIC Educational Resources Information Center
Kükürt, Remzi Onur
2016-01-01
By referring to the theory of semiology, this study aims to present how certain phrases, applications, images, and objects, which are assumed to be unnoticed in the educational process as if they were natural, could be read as signs encrypted with certain ideologically-loaded cultural codes, and to propose semiology as a method for educational…
Filling the Silence: Reactivation, not Reconstruction
Paape, Dario L. J. F.
2016-01-01
In a self-paced reading experiment, we investigated the processing of sluicing constructions (“sluices”) whose antecedent contained a known garden-path structure in German. Results showed decreased processing times for sluices with garden-path antecedents as well as a disadvantage for antecedents with non-canonical word order downstream from the ellipsis site. A post-hoc analysis showed the garden-path advantage also to be present in the region right before the ellipsis site. While no existing account of ellipsis processing explicitly predicted the results, we argue that they are best captured by combining a local antecedent mismatch effect with memory trace reactivation through reanalysis. PMID:26858674
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
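A minimal sketch of the substitution step the report describes, assuming a simple two-column model-output file and an expected-names file with ordered alternatives; the file layout, the default flag, and all names below are hypothetical placeholders rather than SIM_ADJUST's actual input formats (SIM_ADJUST itself is Fortran90).

```python
# Sketch only: substitute alternatives for missing or default-valued simulated equivalents.
DEFAULT_FLAG = -999.0  # assumed sentinel written by the process model for failed cells

def read_simulated(path):
    """Read whitespace/tab-delimited columns of (observation name, simulated value)."""
    values = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                values[parts[0]] = float(parts[1])
    return values

def _is_number(token):
    try:
        float(token)
        return True
    except ValueError:
        return False

def adjust(expected_path, model_path, out_path):
    simulated = read_simulated(model_path)
    with open(expected_path) as f, open(out_path, "w") as out:
        for line in f:
            # each line: observation name followed by an ordered list of alternatives,
            # which may be other observation names or constant fallback values
            name, *alternatives = line.split()
            value = simulated.get(name, DEFAULT_FLAG)
            for alt in alternatives:
                if value != DEFAULT_FLAG:
                    break
                value = simulated.get(alt, float(alt) if _is_number(alt) else DEFAULT_FLAG)
            out.write(f"{value:.6g} {name}\n")
```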
Hyperbolic and semi-hyperbolic surface codes for quantum storage
NASA Astrophysics Data System (ADS)
Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.
2017-09-01
We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.
A Critique of Schema Theory in Reading and a Dual Coding Alternative (Commentary).
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1991-01-01
Evaluates schema theory and presents dual coding theory as a theoretical alternative. Argues that schema theory is encumbered by lack of a consistent definition, its roots in idealist epistemology, and mixed empirical support. Argues that results of many empirical studies used to demonstrate the existence of schemata are more consistently…
Enhancing Nursing and Midwifery Student Learning Through the Use of QR Codes.
Downer, Terri; Oprescu, Florin; Forbes, Helen; Phillips, Nikki; McTier, Lauren; Lord, Bill; Barr, Nigel; Bright, Peter; Simbag, Vilma
A recent teaching and learning innovation using new technologies involves the use of quick response codes, which are read by smartphones and tablets. Integrating this technology as a teaching and learning strategy in nursing and midwifery education has been embraced by academics and students at a regional university.
García-Betances, Rebeca I.; Huerta, Mónica K.
2012-01-01
A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies’ backgrounds, characteristics, advantages and disadvantages, are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D code printed on low cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones’ present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative to automate the patients’ identification processes in low-budget situations. PMID:23569629
Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook
2012-11-20
A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Evaluation of inner-outer space distinction and verbal hallucinations in schizophrenia.
Stephane, Massoud; Kuskowski, Michael; McClannahan, Kate; Surerus, Christa; Nelson, Katie
2010-09-01
Verbal hallucinations could result from attributing one's own inner speech to another. Inner speech is usually experienced in inner space, whereas hallucinations are often experienced in outer space. To clarify this paradox, we investigated schizophrenia patients' ability to distinguish between speech experienced in inner space, and speech experienced in outer space. 32 schizophrenia patients and 26 matched healthy controls underwent a two-stage experiment. First, they read sentences aloud or silently. Afterwards, they were required to distinguish between the sentences read aloud (experienced in outer space), the sentences read silently (experienced in inner space), and new sentences not previously read (no space coding). The sentences were in the first, second, or third person in equal proportions. Linear mixed models were used to investigate the effects of group, sentence location, pronoun, and hallucinations status. Schizophrenia patients were similar to controls in recognition capacity of sentences without space coding. They exhibited both inner-outer and outer-inner space confusion (they confused silently read sentences for sentences read aloud, and vice versa). Patients who experienced hallucinations inside their head were more likely to have outer-inner space bias. For speech generated by one's own brain, schizophrenia patients have bidirectional failure of inner-outer space distinction (inner-outer and outer-inner space biases); this might explain why hallucinations (abnormal inner speech) could be experienced in outer space. Furthermore, the direction of inner-outer space indistinction could determine the spatial location of the experienced hallucinations (inside or outside the head).
Using Noldus Observer XT for research on deaf signers learning to read: an innovative methodology.
Ducharme, Daphne A; Arcand, Isabelle
2009-08-01
Despite years of research on the reading problems of deaf students, we still do not know how deaf signers who read well actually crack the code of print. How connections are made between sign language and written language is still an open question. In this article, we show how the Noldus Observer XT software can be used to conduct an in-depth analysis of the online behavior of deaf readers. First, we examine factors that may have an impact on reading behavior. Then, we describe how we videotaped teachers with their deaf student signers of langue des signes québécoise during a reading task, how we conducted a recall activity to better understand the students' reading behavior, and how we used this innovative software to analyze the taped footage. Finally, we discuss the contribution this type of research can have on the future reading behavior of deaf students.
Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S
2018-04-30
Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near real time feedback on data derived from impedance spectra. Here, we show the use of a simple, open source support vector machine learning algorithm for analyzing impedimetric data in lieu of using equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open source classifier was capable of performing as well as, or better, than the equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating use for mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook (an open source computing environment available for mobile phone, tablet or computer use) and the code was written in Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. All codes were based on scikit-learn, an open source software machine learning library in the Python language, and were processed in Jupyter notebook, an open-source web application for Python. The tool can easily be integrated with the mobile biosensor equipment for rapid detection, facilitating use by a broad range of impedimetric biosensor users. This post hoc analysis tool can serve as a launchpad for the convergence of nanobiosensors in planetary health monitoring applications based on mobile phone hardware.
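As a rough illustration of the classification step described above, the sketch below trains a scikit-learn support vector classifier on synthetic stand-ins for impedance-spectrum feature vectors; the feature construction and labels are invented for the example and are not the authors' data or code.

```python
# Illustrative sketch: an SVM classifying feature vectors derived from impedance spectra
# (e.g., impedance magnitude sampled at a fixed set of frequencies). Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_freqs = 200, 50
X = rng.normal(size=(n_samples, n_freqs))        # placeholder spectral features
y = (X[:, :5].mean(axis=1) > 0).astype(int)      # placeholder labels (e.g., acetone present/absent)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```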
Enrichment of Circular Code Motifs in the Genes of the Yeast Saccharomyces cerevisiae.
Michel, Christian J; Ngoune, Viviane Nguefack; Poch, Olivier; Ripp, Raymond; Thompson, Julie D
2017-12-03
A set X of 20 trinucleotides has been found to have the highest average occurrence in the reading frame, compared to the two shifted frames, of genes of bacteria, archaea, eukaryotes, plasmids and viruses. This set X has an interesting mathematical property, since X is a maximal C3 self-complementary trinucleotide circular code. Furthermore, any motif obtained from this circular code X has the capacity to retrieve, maintain and synchronize the original (reading) frame. Since 1996, the theory of circular codes in genes has mainly been developed by analysing the properties of the 20 trinucleotides of X, using combinatorics and statistical approaches. For the first time, we test this theory by analysing the X motifs, i.e., motifs from the circular code X, in the complete genome of the yeast Saccharomyces cerevisiae. Several properties of X motifs are identified by basic statistics (at the frequency level), and evaluated by comparison to R motifs, i.e., random motifs generated from 30 different random codes R. We first show that the frequency of X motifs is significantly greater than that of R motifs in the genome of S. cerevisiae. We then verify that no significant difference is observed between the frequencies of X and R motifs in the non-coding regions of S. cerevisiae, but that the occurrence number of X motifs is significantly higher than R motifs in the genes (protein-coding regions). This property is true for all cardinalities of X motifs (from 4 to 20) and for all 16 chromosomes. We further investigate the distribution of X motifs in the three frames of S. cerevisiae genes and show that they occur more frequently in the reading frame, regardless of their cardinality or their length. Finally, the ratio of X genes, i.e., genes with at least one X motif, to non-X genes, in the set of verified genes is significantly different to that observed in the set of putative or dubious genes with no experimental evidence. These results, taken together, represent the first evidence for a significant enrichment of X motifs in the genes of an extant organism. They raise two hypotheses: the X motifs may be evolutionary relics of the primitive codes used for translation, or they may continue to play a functional role in the complex processes of genome decoding and protein synthesis.
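The frame-counting idea behind these statistics can be sketched as follows; the trinucleotide set below is a placeholder subset, and the full 20-trinucleotide circular code X should be taken from the cited papers.

```python
# Sketch of the frame-counting idea: count how often trinucleotides from a given set
# occur in frame 0 (the reading frame) versus the two shifted frames of a coding sequence.
# CODE_X is deliberately a placeholder subset, not the full circular code X.
CODE_X = {"AAC", "AAT", "ACC"}

def frame_counts(cds, trinucleotides):
    """Return occurrence counts of `trinucleotides` in frames 0, 1 and 2 of `cds`."""
    counts = [0, 0, 0]
    for frame in range(3):
        for i in range(frame, len(cds) - 2, 3):
            if cds[i:i + 3] in trinucleotides:
                counts[frame] += 1
    return counts

print(frame_counts("ATGAACAATACCGAA", CODE_X))
```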
Color visualization of cyclic magnitudes
NASA Astrophysics Data System (ADS)
Restrepo, Alfredo; Estupiñán, Viviana
2014-02-01
We exploit the perceptual, circular ordering of the hues in a technique for the visualization of cyclic variables. The hue is thus meaningfully used for the indication of variables such as the azimuth and the units of the measurement of time. The cyclic (or circular) variables may be either of the continuous type or the discrete type; among the first there is azimuth and among the last you find the musical notes and the days of the week. A correspondence between the values of a cyclic variable and the chromatic hues, where the natural circular ordering of the variable is respected, is called a color code for the variable. We base such a choice of hues on an assignment of the unique hues red, yellow, green and blue, or one of the 8 even permutations of this ordered list, to 4 cardinal values of the cyclic variable, suitably ordered; color codes based on only 3 cardinal points are also possible. Color codes, being intuitive, are easy to remember. A possible low accuracy when reading instruments that use this technique is compensated by fast, ludic and intuitive readings; also, the use of a referential frame makes readings precise. An achromatic version of the technique, that can be used by dichromatic people, is proposed.
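A minimal sketch of such a hue-based color code, assuming a simple HSV mapping of azimuth onto the hue circle; the anchoring of 0 degrees to red is an illustrative choice rather than the authors' assignment of unique hues to cardinal values.

```python
# Sketch: map a cyclic variable (azimuth in degrees) onto the hue circle and convert to RGB.
import colorsys

def azimuth_to_rgb(azimuth_deg, saturation=1.0, value=1.0):
    hue = (azimuth_deg % 360.0) / 360.0   # wrap the cyclic variable onto [0, 1)
    return colorsys.hsv_to_rgb(hue, saturation, value)

for az in (0, 90, 180, 270):
    r, g, b = azimuth_to_rgb(az)
    print(f"azimuth {az:3d} deg -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")
```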
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; ...
2016-01-06
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
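The two kernels mentioned above can be written out explicitly; the numpy sketch below is only a functional stand-in for what the analog crossbar computes in a single step on its conductance matrix.

```python
# The two crossbar kernels: a vector-matrix multiplication (the "parallel read") and a
# rank-1 update (the "parallel write"). On an analog crossbar these act on the
# conductance matrix in place; here W is just an ordinary array.
import numpy as np

N = 4
rng = np.random.default_rng(1)
W = rng.normal(size=(N, N))     # stand-in for the crossbar conductances
x = rng.normal(size=N)          # input vector applied to the rows

y = x @ W                       # parallel read: N*N multiply-accumulates done in one crossbar step
delta_row = rng.normal(size=N)
delta_col = rng.normal(size=N)
W += np.outer(delta_row, delta_col)   # parallel write: rank-1 update of all N*N weights

print(y)
```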
Arnold, B; Lutz, J; Nilges, P; Pfingsten, M; Rief, Winfried; Böger, A; Brinkschmidt, T; Casser, H-R; Irnich, D; Kaiser, U; Klimczyk, K; Sabatowski, R; Schiltenwolf, M; Söllner, W
2017-12-01
In 2009 the diagnosis chronic pain disorder with somatic and psychological factors (F45.41) was integrated into the German version of the International Classification of Diseases, version 10 (ICD-10-GM). In 2010 Paul Nilges and Winfried Rief published operationalization criteria for this diagnosis. In the present publication the ad hoc commission on multimodal interdisciplinary pain therapy of the German Pain Society now presents a formula for a clear validation of these operationalization criteria of the ICD code F45.41.
Road to the Code: Examining the Necessity and Sufficiency of Program Components
ERIC Educational Resources Information Center
Schmitz, Stephanie L.; Loy, Sedona
2014-01-01
As the ability to read proficiently is essential for success both in and out of the school setting, literacy has become an area of particular focus in today's classrooms. While recent assessments indicate that students are making progress in the area of reading (e.g., National Assessment of Educational Progress [NAEP], 2011), there continues to be…
ERIC Educational Resources Information Center
Smith, Adrina O.
2017-01-01
This study examined the latent factors of dialect variation as they relate to reading achievement of second grade students. Sociocultural theory, identity theories, and critical theory used against a metaphorical backdrop of a bundle of locks were used to illustrate the complexity of language variation and its effect on reading achievement within…
Predicting Item Difficulty in a Reading Comprehension Test with an Artificial Neural Network.
ERIC Educational Resources Information Center
Perkins, Kyle; And Others
This paper reports the results of using a three-layer backpropagation artificial neural network to predict item difficulty in a reading comprehension test. Two network structures were developed, one with and one without a sigmoid function in the output processing unit. The data set, which consisted of a table of coded test items and corresponding…
ERIC Educational Resources Information Center
Reynolds, Michael; Besner, Derek
2006-01-01
The present experiments tested the claim that phonological recoding occurs "automatically" by assessing whether it uses central attention in the context of the psychological refractory period paradigm. Task 1 was a tone discrimination task and Task 2 was reading aloud. The joint effects of long-lag word repetition priming and stimulus onset…
Preschool Children's Exposure to Story Grammar Elements during Parent-Child Book Reading
ERIC Educational Resources Information Center
Breit-Smith, Allison; van Kleeck, Anne; Prendeville, Jo-Anne; Pan, Wei
2017-01-01
Twenty-three preschool-age children, 3;6 (years; months) to 4;1, were videotaped separately with their mothers and fathers while each mother and father read a different unfamiliar storybook to them. The text from the unfamiliar storybooks was parsed and coded into story grammar elements and all parental extratextual utterances were transcribed and…
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
To, Nancy L.; Tighe, Elizabeth L.; Binder, Katherine S.
2015-01-01
For adults with low literacy skills, the role of phonology in reading has been fairly well researched, but less is known about the role of morphology in reading. We investigated the contribution of morphological awareness to word reading and reading comprehension and found that for adults with low literacy skills and skilled readers, morphological awareness explained unique variance in word reading and reading comprehension. In addition, we investigated the effects of orthographic and phonological opacity in morphological processing. Results indicated that adults with low literacy skills were more impaired than skilled readers on items containing phonological changes but were spared on items involving orthographic changes. These results are consistent with previous findings of adults with low literacy skills reliance on orthographic codes. Educational implications are discussed. PMID:27158173
2012-01-01
Background Inter-rater agreement in the interpretation of chest X-ray (CXR) films is crucial for clinical and epidemiological studies of tuberculosis. We compared the readings of CXR films used for a survey of tuberculosis between raters from two Asian countries. Methods Of the 11,624 people enrolled in a prevalence survey in Hanoi, Viet Nam, in 2003, we studied 258 individuals whose CXR films did not exclude the possibility of active tuberculosis. Follow-up films obtained from accessible individuals in 2006 were also analyzed. Two Japanese and two Vietnamese raters read the CXR films based on a coding system proposed by Den Boon et al. and another system newly developed in this study. Inter-rater agreement was evaluated by kappa statistics. Marginal homogeneity was evaluated by the generalized estimating equation (GEE). Results CXR findings suspected of tuberculosis differed between the four raters. The frequencies of infiltrates and fibrosis/scarring detected on the films significantly differed between the raters from the two countries (P < 0.0001 and P = 0.0082, respectively, by GEE). The definition of findings such as primary cavity, used in the coding systems also affected the degree of agreement. Conclusions CXR findings were inconsistent between the raters with different backgrounds. High inter-rater agreement is a component necessary for an optimal CXR coding system, particularly in international studies. An analysis of reading results and a thorough discussion to achieve a consensus would be necessary to achieve further consistency and high quality of reading. PMID:22296612
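For readers unfamiliar with the agreement statistic used here, a minimal sketch of a pairwise Cohen's kappa computation follows; the rater label vectors are synthetic placeholders, not the study's data.

```python
# Sketch of the pairwise agreement statistic: Cohen's kappa between two raters' readings
# of the same films (1 = finding present, 0 = absent). Labels below are synthetic.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))
```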
Kibby, Michelle Y
2009-09-01
Prior research has put forth at least four possible contributors to the verbal short-term memory (VSTM) deficit in children with developmental reading disabilities (RD): poor phonological awareness that affects phonological coding into VSTM, a less effective phonological store, slow articulation rate, and fewer/poorer quality long-term memory (LTM) representations. This project is among the first to test the four suppositions in one study. Participants included 18 children with RD and 18 controls. VSTM was assessed using Baddeley's model of the phonological loop. Findings suggest all four suppositions are correct, depending upon the type of material utilized. Children with RD performed comparably to controls in VSTM for common words but worse for less frequent words and nonwords. Furthermore, only articulation rate predicted VSTM for common words, whereas Verbal IQ and articulation rate predicted VSTM for less frequent words, and phonological awareness and articulation rate predicted VSTM for nonwords. Overall, findings suggest that the mechanism(s) used to code and store items by their meaning is intact in RD, and the deficit in VSTM for less frequent words may be a result of fewer/poorer quality LTM representations for these words. In contrast, phonological awareness and the phonological store are impaired, affecting VSTM for items that are coded phonetically. Slow articulation rate likely affects VSTM for most material when present. When assessing reading performance, VSTM predicted decoding skill but not word identification after controlling Verbal IQ and phonological awareness. Thus, VSTM likely contributes to reading ability when words are novel and must be decoded.
Oidtmann, B; Johnston, C; Klotins, K; Mylrea, G; Van, P T; Cabot, S; Martin, P Rosado; Ababouch, L; Berthe, F
2013-02-01
Trading of aquatic animals and aquatic animal products has become increasingly globalized during the last couple of decades. This commodity trade has increased the risk for the spread of aquatic animal pathogens. The World Organisation for Animal Health (OIE) is recognized as the international standard-setting organization for measures relating to international trade in animals and animal products. In this role, OIE has developed the Aquatic Animal Health Code, which provides health measures to be used by competent authorities of importing and exporting countries to avoid the transfer of agents pathogenic for animals or humans, whilst avoiding unjustified sanitary barriers. An OIE ad hoc group developed criteria for assessing the safety of aquatic animals or aquatic animal products for any purpose from a country, zone or compartment not declared free from a given disease 'X'. The criteria were based on the absence of the pathogenic agent in the traded commodity or inactivation of the pathogenic agent by the commercial processing used to produce the commodity. The group also developed criteria to assess the safety of aquatic animals or aquatic animal products for retail trade for human consumption from potentially infected areas. Such commodities were assessed considering the form and presentation of the product, the expected volume of waste tissues generated by the consumer and the likely presence of viable pathogenic agent in the waste. The ad hoc group applied the criteria to commodities listed in the individual disease chapters of the Aquatic Animal Health Code (2008 edition). Revised lists of commodities for which no additional measures should be required by the importing countries regardless of the status for disease X of the exporting country were developed and adopted by the OIE World Assembly of Delegates in May 2011. The rationale of the criteria and their application will be explained and demonstrated using examples. © 2012 Crown Copyright. Reproduced with the permission of the Controller of Her Majesty’s Stationery Office and Cefas, Aquatic Animal Disease Group.
Final Evaluation of MIPS M/500
1987-11-01
recognizing common subexpressions by changing the code to read: acker(n, m) { if (n == 0) return m+1; return acker(n-1, m == 0 ? 1 : acker(n, m-1)); } the total code...
Sanders, Elizabeth A.; Berninger, Virginia W.; Abbott, Robert D.
2017-01-01
Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in grades 4–9 (N=103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n=25), word reading and spelling (dyslexia, n=60), or oral and written language (OWL LD, n=18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A 5-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% variance. Orthographic word form coding uniquely predicted nearly every measure, whereas attention switching only uniquely predicted reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and OWL LD were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed. PMID:28199175
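A generic sketch of block-wise (sequential) regression of this kind, using statsmodels with invented column names and simulated data; it simply reports the increment in R-squared as each predictor block is added, which is the quantity the study interprets.

```python
# Sketch: add predictor blocks one at a time and record the increment in R-squared.
# Column names and data are illustrative placeholders, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 103
df = pd.DataFrame({
    "translation": rng.normal(size=n),       # Block 1: cognitive-linguistic translation
    "word_form_coding": rng.normal(size=n),  # Block 2
    "phon_ortho_loops": rng.normal(size=n),  # Block 3
    "attention": rng.normal(size=n),         # Block 4
})
df["reading"] = 0.5 * df["translation"] + 0.4 * df["word_form_coding"] + rng.normal(size=n)

blocks = [["translation"], ["word_form_coding"], ["phon_ortho_loops"], ["attention"]]
predictors, prev_r2 = [], 0.0
for block in blocks:
    predictors += block
    model = sm.OLS(df["reading"], sm.add_constant(df[predictors])).fit()
    print(f"after adding {block}: R2 = {model.rsquared:.3f}, delta R2 = {model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```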
NASA Astrophysics Data System (ADS)
Fernandez, Eduardo; Borelli, Noah; Cappelli, Mark; Gascon, Nicolas
2003-10-01
Most current Hall thruster simulation efforts employ either 1D (axial), or 2D (axial and radial) codes. These descriptions crucially depend on the use of an ad-hoc perpendicular electron mobility. Several models for the mobility are typically invoked: classical, Bohm, empirically based, wall-induced, as well as combinations of the above. Experimentally, it is observed that fluctuations and electron transport depend on axial distance and operating parameters. Theoretically, linear stability analyses have predicted a number of unstable modes; yet the nonlinear character of the fluctuations and/or their contribution to electron transport remains poorly understood. Motivated by these observations, a 2D code in the azimuthal and axial coordinates has been written. In particular, the simulation self-consistently calculates the azimuthal disturbances resulting in fluctuating drifts, which in turn (if properly correlated with plasma density disturbances) result in fluctuation-driven electron transport. The characterization of the turbulence at various operating parameters and across the channel length is also the object of this study. A description of the hybrid code used in the simulation as well as the initial results will be presented.
ChromaStarPy: A Stellar Atmosphere and Spectrum Modeling and Visualization Lab in Python
NASA Astrophysics Data System (ADS)
Short, C. Ian; Bayer, Jason H. T.; Burns, Lindsey M.
2018-02-01
We announce ChromaStarPy, an integrated general stellar atmospheric modeling and spectrum synthesis code written entirely in python V. 3. ChromaStarPy is a direct port of the ChromaStarServer (CSServ) Java modeling code described in earlier papers in this series, and many of the associated JavaScript (JS) post-processing procedures have been ported and incorporated into CSPy so that students have access to ready-made data products. A python integrated development environment (IDE) allows a student in a more advanced course to experiment with the code and to graphically visualize intermediate and final results, ad hoc, as they are running it. CSPy allows students and researchers to compare modeled to observed spectra in the same IDE in which they are processing observational data, while having complete control over the stellar parameters affecting the synthetic spectra. We also take the opportunity to describe improvements that have been made to the related codes, ChromaStar (CS), CSServ, and ChromaStarDB (CSDB), that, where relevant, have also been incorporated into CSPy. The application may be found at the home page of the OpenStars project: http://www.ap.smu.ca/OpenStars/.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
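A generic sketch of the seek-and-unpack approach those slides describe, written here in Python rather than MATLAB; the offset, field sizes, and endianness below are hypothetical placeholders and are not the real vk4 layout, which must be taken from the slides themselves.

```python
# Sketch only: seek to a (hypothetical) offset, read assumed little-endian dimensions,
# then read the image block as unsigned integers. Not the actual vk4 format.
import struct

HEIGHT_OFFSET = 0x100  # hypothetical offset of the height-image block

def read_height_image(path):
    with open(path, "rb") as f:
        f.seek(HEIGHT_OFFSET)
        width, height = struct.unpack("<II", f.read(8))   # assumed 32-bit little-endian dimensions
        raw = f.read(width * height * 4)                  # assumed 32-bit pixels
        pixels = struct.unpack(f"<{width * height}I", raw)
    return width, height, pixels
```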
Use of biphase-coded pulses for wideband data storage in time-domain optical memories.
Shen, X A; Kachru, R
1993-06-10
We demonstrate that temporally long laser pulses with appropriate phase modulation can replace either temporally brief or frequency-chirped pulses in a time-domain optical memory to store and retrieve information. A 1.65-µs-long write pulse was biphase modulated according to the 13-bit Barker code for storing multiple bits of optical data into a Pr(3+):YAlO(3) crystal, and the stored information was later recalled faithfully by using a read pulse that was identical to the write pulse. Our results further show that the stored data cannot be retrieved faithfully if mismatched write and read pulses are used. This finding opens up the possibility of designing encrypted optical memories for secure data storage.
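To illustrate why a 13-bit Barker phase code is a natural choice for this kind of matched write/read recall, the sketch below computes the sequence's autocorrelation; this is a textbook property of the Barker code, not a model of the photon-echo experiment itself.

```python
# The 13-bit Barker sequence correlated with itself gives a sharp peak of 13 with
# sidelobes of magnitude at most 1, so a matched "read" pulse recalls the data cleanly
# while a mismatched reference does not.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

autocorr = np.correlate(barker13, barker13, mode="full")
print(autocorr)           # peak of 13 at zero lag, all other lags have magnitude <= 1

mismatched = np.correlate(barker13, barker13[::-1], mode="full")
print(mismatched.max())   # strictly below the matched-filter peak of 13
```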
78 FR 73993 - Bovine Spongiform Encephalopathy; Importation of Bovines and Bovine Products
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-10
... column, in the 16th line from the bottom ``C[squ]N'' should read ``C[Lambda]N''. Sec. 93.418 [Corrected... ``C[Lambda]N''. 3. On the same page, in the third column, in the 1st line ``CN'' should read ``C[Lambda]N''. [FR Doc. C1-2013-28228 Filed 12-9-13; 8:45 am] BILLING CODE 1505-01-D ...
ERIC Educational Resources Information Center
Walma Van Der Molen, Juliette H.; Van Der Voort, Tom H. A.
2000-01-01
Examines three alternative explanations that attribute children's superior recall of television news to (1) underutilization of the print medium; (2) a recall advantage of listening compared with reading; and (3) imperfect reading ability. Finds that the television presentation was remembered better than any of the other versions, consistent with…
Remote Tactile Displays for Future Soldiers
2007-05-01
envisioned as an alternate means of communication for those who are visually or hearing impaired. Oddly, the success of Braille “reading...designers. For example, simply embossing the alphabet in paper to read by touch succumbed to Braille because subtle spatial patterns easily apparent to...fashioned after the six-dot Braille system. Although an alphanumeric code was mastered with surprising rapidity, one striking difficulty emerged
van den Biggelaar, F J H M; Flobbe, K; van Engelshoven, J M A; de Bijl, N P Y M
2009-09-01
This paper focuses on the legal implications in terms of duties and responsibilities for radiologists and radiologic technologists of independent pre-reading of mammograms by radiologic technologists, so patients could be discharged without being seen by a radiologist. Pre-reading could be effectuated when preconditions are met to perform reserved procedures by unauthorised professionals as stated in the Individual Health Care Professions (IHCP) Act. Furthermore, compliance with a protocol or code of conduct in combination with adequate training and supervision should be sufficient to disprove potential claims. For a wide implementation, pre-reading should be well-embedded in legal rules and should answer the professional standard of care.
Hsu, Chun-Ting; Jacobs, Arthur M.; Altmann, Ulrike; Conrad, Markus
2015-01-01
Literature containing supra-natural, or magical events has enchanted generations of readers. When reading narratives describing such events, readers mentally simulate a text world different from the real one. The corresponding violation of world-knowledge during this simulation likely increases cognitive processing demands for ongoing discourse integration, catches readers’ attention, and might thus contribute to the pleasure and deep emotional experience associated with ludic immersive reading. In the present study, we presented participants in an MR scanner with passages selected from the Harry Potter book series, half of which described magical events, while the other half served as control condition. Passages in both conditions were closely matched for relevant psycholinguistic variables including, e.g., emotional valence and arousal, passage-wise mean word imageability and frequency, and syntactic complexity. Post-hoc ratings showed that readers considered supra-natural contents more surprising and more strongly associated with reading pleasure than control passages. In the fMRI data, we found stronger neural activation for the supra-natural than the control condition in bilateral inferior frontal gyri, bilateral inferior parietal lobules, left fusiform gyrus, and left amygdala. The increased activation in the amygdala (part of the salience and emotion processing network) appears to be associated with feelings of surprise and the reading pleasure, which supra-natural events, full of novelty and unexpectedness, brought about. The involvement of bilateral inferior frontal gyri likely reflects higher cognitive processing demand due to world knowledge violations, whereas increased attention to supra-natural events is reflected in inferior frontal gyri and inferior parietal lobules that are part of the fronto-parietal attention network. PMID:25671315
Johansson, Magnus; Zhang, Jingji; Ehrenberg, Måns
2012-01-03
Rapid and accurate translation of the genetic code into protein is fundamental to life. Yet due to lack of a suitable assay, little is known about the accuracy-determining parameters and their correlation with translational speed. Here, we develop such an assay, based on Mg(2+) concentration changes, to determine maximal accuracy limits for a complete set of single-mismatch codon-anticodon interactions. We found a simple, linear trade-off between efficiency of cognate codon reading and accuracy of tRNA selection. The maximal accuracy was highest for the second codon position and lowest for the third. The results rationalize the existence of proofreading in code reading and have implications for the understanding of tRNA modifications, as well as of translation error-modulating ribosomal mutations and antibiotics. Finally, the results bridge the gap between in vivo and in vitro translation and allow us to calibrate our test tube conditions to represent the environment inside the living cell.
Artistic production in dyslectic children.
Cohn, R; Neumann, M A
1977-01-01
In the study of children with language problems, particularly in reading and writing, it has been observed that some have an outstanding ability to produce artistic pictures and objects. These productions are perceptive, well organized and generally contain much action. Despite their pictorial skill these patients may have only a rudimentary use of coded symbolic graphic forms. Others display moderate ability in reading and writing. These patients frequently have the disorganized overactive behavior and the motor clumsiness that is so common in the dyslectic child; some, however, are biologically effective. From this material we entertain the hypothesis that picture (artistic) productions are generated by the sub-dominant cerebral hemisphere, and that this function is quite distinct from the coded graphic operations resident in the dominant hemisphere. If this hypothesis is correct, it would seem socially beneficial to allow these patients to develop their unique artistic ability to its full capacity, and not to overemphasize the correction of the disturbed coded symbol operations in remedial training.
Analysis of the Space Propulsion System Problem Using RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis; Rabiti, Cristian
This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also allows interfacing with several numerical codes such as RELAP5 and RELAP-7 and ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator has been developed and written in the Python language and then interfaced to RAVEN. Such a simulator fully models both deterministic (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random sampling based methodologies (i.e., Monte Carlo). Such analysis is accomplished to determine both the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to generate risk-informed insights such as conditions under which different strategies can be followed.
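A minimal sketch of the Monte Carlo reliability estimate described above, assuming an invented two-line, two-thruster redundancy structure with placeholder failure rates; it is not the benchmark problem's actual model or the RAVEN interface.

```python
# Sketch: sample random component failure times and count the fraction of runs in which
# the system survives the mission. Structure and rates are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
MISSION_TIME = 1000.0                      # hours (placeholder)
RATES = {"line_A": 1e-4, "line_B": 1e-4, "thruster_1": 5e-5, "thruster_2": 5e-5}

def system_ok(failure_times):
    # assumed structure: at least one distribution line and at least one thruster must survive
    line_ok = max(failure_times["line_A"], failure_times["line_B"]) > MISSION_TIME
    thruster_ok = max(failure_times["thruster_1"], failure_times["thruster_2"]) > MISSION_TIME
    return line_ok and thruster_ok

n_runs = 100_000
successes = 0
for _ in range(n_runs):
    times = {name: rng.exponential(1.0 / rate) for name, rate in RATES.items()}
    successes += system_ok(times)

print("estimated mission reliability:", successes / n_runs)
```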
Wirfält, Elisabet; Mattisson, Irene; Johansson, Ulla; Gullberg, Bo; Wallström, Peter; Berglund, Göran
2002-01-01
Background In the Malmö Diet and Cancer study, information on dietary habits was obtained through a modified diet history method, combining a 7-day menu book for cooked meals and a diet questionnaire for foods with low day-to-day variation. Half way through the baseline data collection, a change of interview routines was implemented in order to reduce interview time. Methods Changes concentrated on portion-size estimation and recipe coding of mixed dishes reported in the menu book. All method development and tests were carefully monitored, based on experiential knowledge, and supplemented with empirical data. A post hoc evaluation study using "real world" data compared observed means of selected dietary variables before and after the alteration of routines handling dietary data, controlling for potential confounders. Results These tests suggested that simplified coding rules and standard portion-sizes could be used on a limited number of foods, without distortions of the group mean nutrient intakes, or the participants' ranking. The post hoc evaluation suggested that mean intakes of energy-adjusted fat were higher after the change in routines. The impact appeared greater in women than in men. Conclusions Future descriptive studies should consider selecting subsets assessed with either method version to avoid distortion of observed mean intakes. The impact in analytical studies may be small, because method version and diet assistant explained less than 1 percent of total variation. The distribution of cases and non-cases across method versions should be monitored. PMID:12537595
Arnold, B; Böger, A; Brinkschmidt, T; Casser, H-R; Irnich, D; Kaiser, U; Klimczyk, K; Lutz, J; Pfingsten, M; Sabatowski, R; Schiltenwolf, M; Söllner, W
2018-02-01
With the implementation of the German diagnosis-related groups (DRG) reimbursement system in hospitals, interdisciplinary multimodal pain therapy was incorporated into the associated catalogue of procedures (OPS 8‑918). Yet, the presented criteria describing the procedure of interdisciplinary multimodal pain therapy are neither precise nor unambiguous. This has led to discrepancies in the interpretation regarding the handling of the procedure, making it difficult for medical services of health insurance companies to evaluate the accordance between the delivered therapy and the required criteria. Since the number of pain units has increased in recent years, the number of examinations by the medical service of health insurance companies has increased. This article, published by the ad hoc commission for interdisciplinary multimodal pain therapy of the German Pain Society, provides specific recommendations for correct implementation of interdisciplinary multimodal pain therapy in routine care. The aim is to achieve a maximum level of accordance between health care providers and the requirements of the medical examiners from health insurance companies. More extensive criteria regarding interdisciplinary multimodal pain treatment in an in-patient setting, especially for patients with chronic and complex pain, are obviously needed. Thus, the authors further discuss specific aspects of the further development of the OPS-code. However, the application of the OPS-code still leaves room for interpretation regarding treatment intensity and process quality. Therefore, the delivery of pain management in sufficient quantity and quality still remains the responsibility of each health care provider.
Genometa--a fast and accurate classifier for short metagenomic shotgun reads.
Davenport, Colin F; Neugebauer, Jens; Beckmann, Nils; Friedrich, Benedikt; Kameri, Burim; Kokott, Svea; Paetow, Malte; Siekmann, Björn; Wieding-Drewes, Matthias; Wienhöfer, Markus; Wolf, Stefan; Tümmler, Burkhard; Ahlers, Volker; Sprengel, Frauke
2012-01-01
Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
USDA-ARS's Scientific Manuscript database
Food-grade tracers were printed with two-dimensional Data Matrix (DM) barcode so that they could carry simulated identifying information about grain as part of a prospective traceability system. The key factor in evaluating the tracers was their ability to be read with a code scanner after being rem...
One State, Two State, Red State, Blue State: Education Funding Accounts for Outcome Differences
ERIC Educational Resources Information Center
Meece, Darrell
2008-01-01
Using publically available data, states coded as "blue" based upon results from the 2004 presidential election were significantly higher in education funding than were states coded as "red." Students in blue states scored significantly higher on outcome measures of math and reading in grades four and eight in 2004 and 2007 than did students in red…
Hermeneutic and Cultural Codes of S/Z: A Semiological Reading of James Joyce's "The Boarding House"
ERIC Educational Resources Information Center
Booryazadeh, Seyed Ali; Faghfori, Sohila; Shamsi, Habibe
2014-01-01
Roland Barthes as a fervent proponent of semiology believes that semiology is a branch of a comprehensive linguistics: it is the study of how language articulates the world. Semiotic codes, the paths of this articulation, accordingly underlie his attention. Barthes in a structural analysis of Balzac's "Sarrasine" in S/Z expounds five…
ERIC Educational Resources Information Center
Glazer, Susan Mandel
This concise book shares several sensible, logical, and meaningful approaches that guide young children to use the written coding system to read, spell, and make meaning of the English language coding system. The book demonstrates that phonics, spelling, and word study are essential parts of literacy learning. After an introduction, chapters are:…
Moggy, Melissa; Pajor, Edmond; Thurston, Wilfreda; Parker, Sarah; Greter, Angela; Schwartzkopf-Genswein, Karen; Campbell, John; Windeyer, M Claire
2017-11-01
This study describes western Canadian cow-calf producers' attitudes towards the Code of Practice for the Care and Handling of Beef Cattle (COPB). Most respondents had not read the COPB. Of those familiar with the COPB, most agreed with it, but it did not have a major influence on their decisions.
Cai, Yong; Li, Xiwen; Wang, Runmiao; Yang, Qing; Li, Peng; Hu, Hao
2016-01-01
Currently, chemical fingerprint comparison and analysis is mainly based on professional equipment and software, which is expensive and inconvenient. This study aims to integrate QR (Quick Response) codes with quality data and mobile intelligent technology to develop a convenient query terminal for tracing quality in the whole industrial chain of TCM (traditional Chinese medicine). Three herbal medicines were randomly selected and their two-dimensional (2D) barcode chemical fingerprints were constructed. A smartphone application (app) based on the Android system was developed to read the initial data of the 2D chemical barcodes and to compare multiple fingerprints from different batches of the same species or from different species. It was demonstrated that there were no significant differences between original and scanned TCM chemical fingerprints. Meanwhile, different TCM chemical fingerprint QR codes could be rendered in the same coordinate system, showing the differences very intuitively. To distinguish variations between chemical fingerprints more directly, a linear interpolation angle cosine similarity algorithm (LIACSA) was proposed to compute a similarity ratio. This study showed that QR codes can be used as an effective information carrier to transfer quality data. The smartphone application can rapidly read quality information from QR codes and convert the data into TCM chemical fingerprints.
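The abstract does not spell out the LIACSA formula; the minimal Python sketch below shows one plausible reading of the idea, in which two chromatographic fingerprints given as (retention time, intensity) pairs are resampled onto a shared grid by linear interpolation and then compared with a cosine similarity. The fingerprint values are made up.

# Plausible sketch of a linear-interpolation plus cosine-similarity comparison of two
# fingerprints; the exact LIACSA formulation is an assumption, not taken from the paper.
import numpy as np

def similarity_ratio(fp_a, fp_b, n_points=500):
    ta, ia = np.array(fp_a).T
    tb, ib = np.array(fp_b).T
    grid = np.linspace(max(ta.min(), tb.min()), min(ta.max(), tb.max()), n_points)
    ya = np.interp(grid, ta, ia)            # linear interpolation onto a shared grid
    yb = np.interp(grid, tb, ib)
    return float(np.dot(ya, yb) / (np.linalg.norm(ya) * np.linalg.norm(yb)))

fp1 = [(0.0, 0.10), (1.0, 0.80), (2.0, 0.30), (3.0, 0.05)]   # (retention time, intensity)
fp2 = [(0.0, 0.12), (0.9, 0.75), (2.1, 0.28), (3.0, 0.07)]
print(f"similarity ratio: {similarity_ratio(fp1, fp2):.3f}")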
Diehl, V A; Mills, C B
1995-11-01
In two experiments, subjects interacted to different extents with relevant devices while reading two complex multistep procedural texts and were then tested with task performance time, true/false, and recall measures. While reading, subjects performed the task (read and do), saw the experimenter perform the task (read and see experimenter do), imagined doing the task (read and imagine), looked at the device while reading (read and see), or only read (read only). Van Dijk and Kintsch's (1983) text representation theory led to the prediction that exposure to the task device (in the read-and-do, read-and-see, and read-and-see-experimenter-do conditions) would lead to the development of a stronger situation model and therefore faster task performance, whereas the read-only and read-and-see conditions would lead to a better textbase, and therefore better performance on the true/false and recall tasks. Paivio's (1991) dual coding theory led to the opposite prediction for recall. The results supported the text representation theory with task performance and recall. The read-and-see condition produced consistently good performance on the true/false measure. Amount of text study time contributed to recall performance. These findings support the notion that information available while reading leads to differential development of representations in memory, which, in turn, causes differences in performance on various measures.
Reading strategies of Chinese students with severe to profound hearing loss.
Cheung, Ka Yan; Leung, Man Tak; McPherson, Bradley
2013-01-01
The present study investigated the significance of auditory discrimination and the use of phonological and orthographic codes during the course of reading development in Chinese students who are deaf or hard of hearing (D/HH). In this study, the reading behaviors of D/HH students in two tasks (a task on auditory perception of onset rime and a synonym decision task) were compared with those of their chronological age-matched and reading level (RL)-matched controls. Cross-group comparison of the performances of participants in the task on auditory perception suggests that poor auditory discrimination ability may be a possible cause of reading problems for D/HH students. In addition, results of the synonym decision task reveal that D/HH students with poor reading ability demonstrate a significantly greater preference for orthographic rather than phonological information, when compared with the D/HH students with good reading ability and their RL-matched controls. Implications for future studies and educational planning are discussed.
Psychometric Properties of the System for Coding Couples’ Interactions in Therapy - Alcohol
Owens, Mandy D.; McCrady, Barbara S.; Borders, Adrienne Z.; Brovko, Julie M.; Pearson, Matthew R.
2014-01-01
Few systems are available for coding in-session behaviors for couples in therapy. Alcohol Behavior Couples Therapy (ABCT) is an empirically supported treatment, but little is known about its mechanisms of behavior change. In the current study, an adapted version of the Motivational Interviewing for Significant Others coding system was developed into the System for Coding Couples’ Interactions in Therapy – Alcohol (SCCIT-A), which was used to code couples’ interactions and behaviors during ABCT. Results showed good inter-rater reliability of the SCCIT-A and provided evidence that the SCCIT-A may be a promising measure for understanding couples in therapy. A three-factor model of the SCCIT-A (Positive, Negative, and Change Talk/Counter-Change Talk) was examined using a confirmatory factor analysis, but model fit was poor. Due to poor model fit, ratios were computed for Positive/Negative ratings and for Change Talk/Counter-Change Talk codes based on previous research in the couples and Motivational Interviewing literature. Post-hoc analyses examined correlations between specific SCCIT-A codes and baseline characteristics and indicated some concurrent validity. Correlations were run between ratios and baseline characteristics; ratios may be an alternative to using the factors from the SCCIT-A. Reliability and validity analyses suggest that the SCCIT-A has the potential to be a useful measure for coding in-session behaviors of both partners in couples therapy and could be used to identify mechanisms of behavior change for ABCT. Additional research is needed to improve the reliability of some codes and to further develop the SCCIT-A and other measures of couples’ interactions in therapy. PMID:25528049
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depriest, Kendall
Unsuccessful attempts by members of the radiation effects community to independently derive the Norgett-Robinson-Torrens (NRT) damage energy factors for silicon in ASTM standard E722-14 led to an investigation of the software coding and data that produced those damage energy factors. The ad hoc collaboration to discover the reason for the lack of agreement revealed a coding error and resulted in a report documenting the methodology to produce the response function for the standard. The recommended changes in the NRT damage energy factors for silicon are shown to have significant impact for a narrow energy region of the 1-MeV(Si) equivalent fluence response function. However, when evaluating integral metrics over all neutron energies in various spectra important to the SNL electronics testing community, the change in the response results in a small decrease in the total 1-MeV(Si) equivalent fluence of ~0.6% compared to the E722-14 response. Response functions based on the newly recommended NRT damage energy factors have been produced and are available for users of both the NuGET and MCNP codes.
Mathematical fundamentals for the noise immunity of the genetic code.
Fimmel, Elena; Strüngmann, Lutz
2018-02-01
Symmetry is one of the essential and most visible patterns that can be seen in nature. Starting from the left-right symmetry of the human body, all types of symmetry can be found in crystals, plants, animals and nature as a whole. Similarly, principles of symmetry are among the fundamental and most useful tools in modern mathematical natural science, playing a major role in theory and applications. As a consequence, it is not surprising that the desire to understand the origin of life, based on the genetic code, forces us to involve symmetry as a mathematical concept. The genetic code can be seen as a key to biological self-organisation. All living organisms have the same molecular bases - an alphabet consisting of four letters (nitrogenous bases): adenine, cytosine, guanine, and thymine. Linearly ordered sequences of these bases contain the genetic information for synthesis of proteins in all forms of life. Thus, one of the most fascinating riddles of nature is to explain why the genetic code is as it is. Genetic coding possesses noise immunity, the fundamental feature that allows genetic information to be passed on from parents to their descendants. Hence, since the time of the discovery of the genetic code, scientists have tried to explain the noise immunity of the genetic information. In this chapter we will discuss recent results in mathematical modelling of the genetic code with respect to noise immunity, in particular error detection and error correction. We will focus on two central properties: degeneracy and frameshift correction. Different amino acids are encoded by different numbers of codons, and a connection between this degeneracy and the noise immunity of genetic information is a long-standing hypothesis. Biological implications of the degeneracy have been intensively studied, and whether the natural code is a frozen accident or a highly optimised product of evolution is still controversially discussed. Symmetries in the structure of degeneracy of the genetic code are essential and give evidence of substantial advantages of the natural code over other possible ones. In the present chapter we will present a recent approach to explain the degeneracy of the genetic code by algorithmic methods from bioinformatics, and discuss its biological consequences. Biologists recognised the frameshift problem immediately after the detection of the non-overlapping structure of the genetic code, i.e., coding sequences are to be read in a unique way determined by their reading frame. But how does the reading head of the ribosome recognise an error in the grouping of codons, caused by e.g. insertion or deletion of a base, that can be fatal during the translation process and may result in nonfunctional proteins? In this chapter we will discuss possible solutions to the frameshift problem with a focus on the theory of so-called circular codes that were discovered in large gene populations of prokaryotes and eukaryotes in the early 90s. Circular codes allow the detection of a frameshift of one or two positions, and recently a beautiful theory of such codes has been developed using statistics, group theory and graph theory. Copyright © 2017 Elsevier B.V. All rights reserved.
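As a toy illustration of frame detection with a circular code, the Python sketch below counts, for each of the three frames of a sequence, how many codons belong to a given trinucleotide set; the frame with the most hits is taken as the reading frame. The 20-codon set used is the one commonly cited as the Arquès-Michel circular code X and is included here only as an assumed example.

# The 20 trinucleotides below are the set commonly cited as the Arques-Michel circular
# code X; they are an assumed example and should be checked against the literature.
X_CODE = {
    "AAC", "AAT", "ACC", "ATC", "ATT", "CAG", "CTC", "CTG", "GAA", "GAC",
    "GAG", "GAT", "GCC", "GGC", "GGT", "GTA", "GTC", "GTT", "TAC", "TTC",
}

def frame_scores(seq, code=X_CODE):
    """Count, for each of the three frames, how many codons belong to the code."""
    scores = []
    for frame in range(3):
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        scores.append(sum(c in code for c in codons))
    return scores

seq = "GACGTTCTGGAAATCGGTACCTTC"   # toy sequence built from code words in frame 0
print(frame_scores(seq))            # frame 0 scores highest; a shift would be flagged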
Niederer, Daniel; Bumann, Anke; Mühlhauser, Yvonne; Schmitt, Mareike; Wess, Katja; Engeroff, Tobias; Wilke, Jan; Vogt, Lutz; Banzer, Winfried
2018-05-01
Mobile phone tasks like texting, typing, and dialling during walking are known to impact gait characteristics. Beyond that, the effects of performing smartphone-typical actions like researching and taking self-portraits (selfies) on gait have not yet been investigated. We aimed to investigate the effects of smartphone usage on relevant gait characteristics and to reveal potential associations between basic cognitive abilities and walking-plus-smartphone dual-task abilities. Our cross-sectional, cross-over study on physically active, healthy participants was performed on two days, separated by a 24-h washout. Assessments were: 1) a cognitive testing battery consisting of the trail making test (TMT A and B) and the Stroop test; 2) treadmill walking under five smartphone usage conditions in randomized order: no use (control condition), reading, dialling, internet searching and taking a selfie. Kinematic and kinetic gait characteristics were assessed to estimate the influence of each condition. In our sample of 36 adults (24.6 ± 1 years, 23 female, 13 male), ANCOVAs followed by post-hoc t-tests revealed that smartphone usage impaired all tested gait characteristics: gait speed (decrease, all conditions): F = 54.7, p < 0.001; cadence (increase, all): F = 38.3, p < 0.001; double stride length (decrease, all): F = 33.8, p < 0.001; foot external rotation (increase during dialling, researching, selfie): F = 16.7, p < 0.001; stride length variability (increase): F = 11.7, p < 0.001; step width variability (increase): F = 5.3, p < 0.001; step width (Friedman test and Wilcoxon Bonferroni-Holm-corrected post-hoc analyses, increase): Z = -2.3 to -2.9, p < 0.05; plantar pressure proportion (increase during reading and researching): Z = -2.9, p < 0.01. The ability to maintain usual gait quality during smartphone usage was systematically associated with the TMT B time for cadence and double stride length when reading (r = -0.37), dialling (r = -0.35) and taking a selfie (r = -0.34). Smartphone usage substantially impacts walking characteristics in most situations. Changes in gait patterns indicate higher cognitive loads and lower awareness. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Latham Keh, Melissa Anne
2014-01-01
It is well documented that ELLs face significant challenges as they develop literacy skills in their second language (NCES, 2007, 2011). This population is diverse and growing rapidly in Massachusetts and across the nation (Massachusetts Department of Elementary and Secondary Education, 2013; NCELA, 2011; Orosco, De Schonewise, De Onis, Klingner,…
ERIC Educational Resources Information Center
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 25 titles deal with a variety of topics, including the following: reading achievement as it relates to child dependency, the development of phonological coding, short-term memory and associative learning, variables available in…
Genomics-Based Security Protocols: From Plaintext to Cipherprotein
NASA Technical Reports Server (NTRS)
Shaw, Harry; Hussein, Sayed; Helgert, Hermann
2011-01-01
The evolving nature of the internet will require continual advances in authentication and confidentiality protocols. Nature provides some clues as to how this can be accomplished in a distributed manner through molecular biology. Cryptography and molecular biology share certain aspects and operations that allow for a set of unified principles to be applied to problems in either venue. A concept for developing security protocols that can be instantiated at the genomics level is presented. A DNA (Deoxyribonucleic acid) inspired hash code system is presented that utilizes concepts from molecular biology. It is a keyed-Hash Message Authentication Code (HMAC) capable of being used in secure mobile Ad hoc networks. It is targeted for applications without an available public key infrastructure. Mechanics of creating the HMAC are presented as well as a prototype HMAC protocol architecture. Security concepts related to the implementation differences between electronic domain security and genomics domain security are discussed.
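For orientation, the following standard-library Python example shows the conventional keyed-HMAC primitive that the genomics-based protocol builds on; it is not the paper's DNA-inspired construction, and the key and message are placeholders.

# Conventional keyed HMAC with the Python standard library (not the DNA-inspired scheme):
# sender tags a message with a pre-shared key; receiver recomputes and compares the tag.
import hashlib
import hmac

key = b"pre-shared-secret"                 # placeholder key (no PKI assumed)
message = b"node A -> node B: sensor reading 42.0"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("message tag:", tag)

# Receiver side: recompute the tag and compare in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print("authentic:", ok)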
clearScience: Infrastructure for Communicating Data-Intensive Science.
Bot, Brian M; Burdick, David; Kellen, Michael; Huang, Erich S
2013-01-01
Progress in biomedical research requires effective scientific communication to one's peers and to the public. Current research routinely encompasses large datasets and complex analytic processes, and the constraints of traditional journal formats limit useful transmission of these elements. We are constructing a framework through which authors can provide not only the narrative of what was done, but also the primary and derivative data, the source code, the compute environment, and web-accessible virtual machines. This infrastructure allows authors to "hand their machine", prepopulated with libraries, data, and code, to those interested in reviewing or building off of their work. This project, "clearScience," seeks to provide an integrated system that accommodates the ad hoc nature of discovery in the data-intensive sciences and seamless transitions from working to reporting. We demonstrate that rather than merely describing the science being reported, one can deliver the science itself.
Vellas, B; Fielding, R A; Bens, C; Bernabei, R; Cawthon, P M; Cederholm, T; Cruz-Jentoft, A J; Del Signore, S; Donahue, S; Morley, J; Pahor, M; Reginster, J-Y; Rodriguez Mañas, L; Rolland, Y; Roubenoff, R; Sinclair, A; Cesari, M
2018-01-01
Establishment of an ICD-10-CM code for sarcopenia in 2016 was an important step towards reaching international consensus on the need for a nosological framework of age-related skeletal muscle decline. The International Conference on Frailty and Sarcopenia Research Task Force met in April 2017 to discuss the meaning, significance, and barriers to the implementation of the new code as well as strategies to accelerate development of new therapies. Analyses by the Sarcopenia Definitions and Outcomes Consortium are underway to develop quantitative definitions of sarcopenia. A consensus conference is planned to evaluate this analysis. The Task Force also discussed lessons learned from sarcopenia trials that could be applied to future trials, as well as lessons from the osteoporosis field, a clinical condition with many constructs similar to sarcopenia and for which ad hoc treatments have been developed and approved by regulatory agencies.
Identifying students with dyslexia in higher education.
Tops, Wim; Callens, Maaike; Lammertyn, Jan; Van Hees, Valérie; Brysbaert, Marc
2012-10-01
An increasing number of students with dyslexia enter higher education. As a result, there is a growing need for standardized diagnosis. Previous research has suggested that a small number of tests may suffice to reliably assess students with dyslexia, but these studies were based on post hoc discriminant analysis, which tends to overestimate the percentage of systematic variance, and were limited to the English language (and the Anglo-Saxon education system). Therefore, we repeated the research in a non-English language (Dutch) and we selected variables on the basis of a prediction analysis. The results of our study confirm that it is not necessary to administer a wide range of tests to diagnose dyslexia in (young) adults. Three tests sufficed: word reading, word spelling and phonological awareness, in line with the proposal that higher education students with dyslexia continue to have specific problems with reading and writing. We also show that a traditional postdiction analysis selects more variables of importance than the prediction analysis. However, these extra variables explain study-specific variance and do not result in more predictive power of the model.
Greenslade, Kathryn J; Coggins, Truman E
2014-01-01
Identifying what a communication partner is looking at (referential intention) and why (social intention) is essential to successful social communication, and may be challenging for children with social communication deficits. This study explores a clinical task that assesses these intention-reading abilities within an authentic context. The aims were to gather evidence of the task's reliability and validity and to discuss its clinical utility. The intention-reading task was administered to twenty 4-7-year-olds with typical development (TD) and ten with autism spectrum disorder (ASD). Task items were embedded in an authentic activity, and they targeted the child's ability to identify the examiner's referential and social intentions, which were communicated through joint attention behaviours. Reliability and construct validity evidence were addressed using established psychometric methods. Reliability and validity evidence supported the use of task scores for identifying children whose intention-reading warranted concern. Evidence supported the reliability of task administration and coding, and item-level codes were highly consistent with overall task performance. Supporting task validity, group differences aligned with predictions, with children with ASD exhibiting poorer and more variable task scores than children with TD. Also, as predicted, task scores correlated significantly with verbal mental age and ratings of parental concerns regarding social communication abilities. The evidence provides preliminary support for the reliability and validity of the clinical task's scores in assessing young children's real-time intention-reading abilities, which are essential for successful interactions in school and beyond. © 2014 Royal College of Speech and Language Therapists.
Sanders, Elizabeth A; Berninger, Virginia W; Abbott, Robert D
Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in Grades 4 through 9 ( N = 103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n = 25), word reading and spelling (dyslexia, n = 60), or oral and written language (oral and written language learning disabilities, n = 18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A five-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word-form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of variance. Orthographic word-form coding uniquely predicted nearly every measure, whereas attention switching uniquely predicted only reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and oral and written language learning disabilities were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed.
Plessky, Victor P; Reindl, Leonhard M
2010-03-01
SAW tags were invented more than 30 years ago, but only today are the conditions in place for mass application of this technology. The devices in the 2.4-GHz ISM band can be routinely produced with optical lithography, high-resolution radar systems can be built up using highly sophisticated but low-cost RF chips, and the Internet is available for global access to the tag databases. The "Internet of Things," or I-o-T, will demand trillions of cheap tags and sensors. SAW tags can outperform semiconductor-based analogs in many respects: they can be read at a distance of a few meters with readers radiating power levels 2 to 3 orders of magnitude lower, they are cheap, and they can operate in harsh environments. Passive SAW tags are easily combined with sensors. Even the "anti-collision" problem (i.e., the simultaneous reading of many nearby tags) has adequate solutions for many practical applications. In this paper, we discuss the state of the art in the development of SAW tags. The design approaches will be reviewed and optimal tag designs, as well as encoding methods, will be demonstrated. We discuss ways to reduce the size and cost of these devices. A few practical examples of tags using a time-position coding with 10^6 different codes will be demonstrated. Phase-coded devices can additionally increase the number of codes at the expense of a reduction in reading distance. We also discuss new and exciting perspectives of using ultra-wideband (UWB) technology for SAW-tag systems. The wide frequency band available for this standard provides a great opportunity for SAW tags to be radically reduced in size to about 1 × 1 mm^2 while keeping a practically infinite number of possible different codes. Finally, the reader technology will be discussed, as well as a detailed comparison made between SAW tags and IC-based semiconductor devices.
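A small illustrative calculation (not the paper's design) of why time-position coding can reach roughly 10^6 codes: with m reflectors each placed independently in one of s time slots, and optionally one of p phases, the code space is (s*p)^m. The numbers below are examples only.

# Illustrative code-capacity estimate for time-position (and phase) coded SAW tags;
# the reflector, slot and phase counts are assumed example values, not the paper's design.
def code_capacity(m_reflectors, n_slots, n_phases=1):
    return (n_slots * n_phases) ** m_reflectors

print(code_capacity(m_reflectors=5, n_slots=16))               # 16**5 = 1,048,576, about 10^6
print(code_capacity(m_reflectors=5, n_slots=16, n_phases=4))   # phase coding enlarges the space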
Montandon, P E; Vasserot, A; Stutz, E
1986-01-01
We retrieved a 1.6 kbp intron separating two exons of the psb C gene which codes for the 44 kDa reaction center protein of photosystem II. This intron is 3 to 4 times the size of all previously sequenced Euglena gracilis chloroplast introns. It contains an open reading frame of 458 codons potentially coding for a basic protein of 54 kDa of yet unknown function. The intron boundaries follow consensus sequences established for chloroplast introns related to class II and nuclear pre-mRNA introns. Its 3'-terminal segment has structural features similar to class II mitochondrial introns with an invariant base A as possible branch point for lariat formation.
Exploring the read-write genome: mobile DNA and mammalian adaptation.
Shapiro, James A
2017-02-01
The read-write genome idea predicts that mobile DNA elements will act in evolution to generate adaptive changes in organismal DNA. This prediction was examined in the context of mammalian adaptations involving regulatory non-coding RNAs, viviparous reproduction, early embryonic and stem cell development, the nervous system, and innate immunity. The evidence shows that mobile elements have played specific and sometimes major roles in mammalian adaptive evolution by generating regulatory sites in the DNA and providing interaction motifs in non-coding RNA. Endogenous retroviruses and retrotransposons have been the predominant mobile elements in mammalian adaptive evolution, with the notable exception of bats, where DNA transposons are the major agents of RW genome inscriptions. A few examples of independent but convergent exaptation of mobile DNA elements for similar regulatory rewiring functions are noted.
Prosdocimi, Francisco; Souto, Helena Magarinos; Ruschi, Piero Angeli; Furtado, Carolina; Jennings, W Bryan
2016-09-01
The genome of the versicoloured emerald hummingbird (Amazilia versicolor) was partially sequenced in one-sixth of an Illumina HiSeq lane. The mitochondrial genome was assembled using MIRA and MITObim software, yielding a circular molecule of 16,861 bp in length that was deposited in GenBank under the accession number KF624601. The mitogenome contained 13 protein-coding genes, 22 transfer RNAs, 2 ribosomal RNAs and 1 non-coding control region. The molecule was assembled using 21,927 sequencing reads of 100 bp each, resulting in ∼130× coverage of uniformly distributed reads along the genome. This is the fourth mitochondrial genome described for this highly diverse family of birds and may benefit further phylogenetic, phylogeographic, population genetic and species delimitation studies of hummingbirds.
Death of a dogma: eukaryotic mRNAs can code for more than one protein
Mouilleron, Hélène; Delcourt, Vivian; Roucou, Xavier
2016-01-01
mRNAs carry the genetic information that is translated by ribosomes. The traditional view of a mature eukaryotic mRNA is a molecule with three main regions, the 5′ UTR, the protein coding open reading frame (ORF) or coding sequence (CDS), and the 3′ UTR. This concept assumes that ribosomes translate one ORF only, generally the longest one, and produce one protein. As a result, in the early days of genomics and bioinformatics, one CDS was associated with each protein-coding gene. This fundamental concept of a single CDS is being challenged by increasing experimental evidence indicating that annotated proteins are not the only proteins translated from mRNAs. In particular, mass spectrometry (MS)-based proteomics and ribosome profiling have detected productive translation of alternative open reading frames. In several cases, the alternative and annotated proteins interact. Thus, the expression of two or more proteins translated from the same mRNA may offer a mechanism to ensure the co-expression of proteins which have functional interactions. Translational mechanisms already described in eukaryotic cells indicate that the cellular machinery is able to translate different CDSs from a single viral or cellular mRNA. In addition to summarizing data showing that the protein coding potential of eukaryotic mRNAs has been underestimated, this review aims to challenge the single translated CDS dogma. PMID:26578573
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
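As a hedged, one-dimensional illustration of solution-adaptive gridding (not the EAGLE elliptic system itself), the sketch below redistributes grid points so that a gradient-based weight is equidistributed, clustering points where the flow variable changes rapidly.

# One-dimensional equidistribution sketch of solution-adaptive gridding (illustrative
# only, not the EAGLE code): cluster grid points where the flow variable varies fast.
import numpy as np

def adapt_grid(x, u, alpha=5.0):
    w = 1.0 + alpha * np.abs(np.gradient(u, x))                # weight: large where u varies fast
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s /= s[-1]                                                 # normalised weighted arc length
    return np.interp(np.linspace(0.0, 1.0, len(x)), s, x)      # equidistribute the weight

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(20.0 * (x - 0.5))                                  # toy flow variable with a sharp layer
print(np.round(adapt_grid(x, u)[18:23], 3))                    # points cluster near x = 0.5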
EXODUS II: A finite element data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoof, L.A.; Yarberry, V.R.
1994-09-01
EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).
''The W and M Are Mixing Me Up'': Use of a Visual Code in Verbal Short-Term Memory Tasks
ERIC Educational Resources Information Center
Best, W.; Howard, D.
2005-01-01
When normal participants are presented with written verbal short-term memory tasks (e.g., remembering a set of letters for immediate spoken recall) there is evidence to suggest that the information is re-coded into phonological form. This paper presents a single case study of MJK whose reading follows the pattern of phonological dyslexia. In…
Moggy, Melissa; Pajor, Edmond; Thurston, Wilfreda; Parker, Sarah; Greter, Angela; Schwartzkopf-Genswein, Karen; Campbell, John; Windeyer, M. Claire
2017-01-01
This study describes western Canadian cow-calf producers’ attitudes towards the Code of Practice for the Care and Handling of Beef Cattle (COPB). Most respondents had not read the COPB. Of those familiar with the COPB, most agreed with it, but it did not have a major influence on their decisions. PMID:29089660
Wallot, Sebastian
2014-01-01
The empirical study of reading dates back more than 125 years. But despite this long tradition, the scientific understanding of reading has made rather heterogeneous progress: many factors that influence the process of text reading have been uncovered, but theoretical explanations remain fragmented; no general theory pulls together the diverse findings. A handful of scholars have noted that properties thought to be at the core of the reading process do not actually generalize across different languages or from single-word reading situations to connected text reading. Such observations cast doubt on many of the traditional conceptions about reading. In this article, I suggest that the observed heterogeneity in the research is due to misguided conceptions about the reading process. Particularly problematic are the unrefined notions about meaning which undergird many reading theories: most psychological theories of reading implicitly assume a kind of elemental token semantics, where words serve as stable units of meaning in a text. This conception of meaning creates major conceptual problems. As an alternative, I argue that reading should rather be understood as a form of language use, which circumvents many of the conceptual problems and connects reading to a wider range of linguistic communication. Finally, drawing from Wittgenstein, the concept of "language games" is outlined as an approach to language use that can be operationalized scientifically to provide a new foundation for reading research.
Cioffi, Anna Valentina; Ferrara, Diana; Cubellis, Maria Vittoria; Aniello, Francesco; Corrado, Marcella; Liguori, Francesca; Amoroso, Alessandro; Fucci, Laura; Branno, Margherita
2002-08-01
Analysis of the genome structure of the Paracentrotus lividus (sea urchin) DNA methyltransferase (DNA MTase) gene showed the presence of an open reading frame, named METEX, in intron 7 of the gene. METEX expression is developmentally regulated, showing no correlation with DNA MTase expression. In fact, DNA MTase transcripts are present at high concentrations in the early developmental stages, while METEX is expressed at late stages of development. Two METEX cDNA clones (Met1 and Met2) that are different in the 3' end have been isolated in a cDNA library screening. The putative translated protein from Met2 cDNA clone showed similarity with Escherichia coli endonuclease III on the basis of sequence and predictive three-dimensional structure. The protein, overexpressed in E. coli and purified, had functional properties similar to the endonuclease specific for apurinic/apyrimidinic (AP) sites on the basis of the lyase activity. Therefore the open reading frame, present in intron 7 of the P. lividus DNA MTase gene, codes for a functional AP endonuclease designated SuAP1.
McDonald, Elsie
1991-04-24
Shortly after I read Christopher Goodall's excellent piece on pornography (A Social Disease? Nursing Standard, March 20), the British Safety Council launched a code to protect lone women drivers whose cars break down.
Hsu, Chun-Ting; Conrad, Markus; Jacobs, Arthur M
2014-12-03
Immersion in reading, described as a feeling of 'getting lost in a book', is a ubiquitous phenomenon widely appreciated by readers. However, it has been largely ignored in cognitive neuroscience. According to the fiction feeling hypothesis, narratives with emotional contents invite readers to empathize with the protagonists more than stories with neutral contents do, and thus engage the affective empathy network of the brain, the anterior insula and mid-cingulate cortex. To test the hypothesis, we presented participants with text passages from the Harry Potter series in a functional MRI experiment and collected post-hoc immersion ratings, comparing the neural correlates of passage mean immersion ratings when reading fear-inducing versus neutral contents. Results for the conjunction contrast of baseline brain activity of reading irrespective of emotional content against baseline were in line with previous studies on text comprehension. In line with the fiction feeling hypothesis, immersion ratings were significantly higher for fear-inducing than for neutral passages, and activity in the mid-cingulate cortex correlated more strongly with immersion ratings of fear-inducing than of neutral passages. Descriptions of protagonists' pain or personal distress featured in the fear-inducing passages apparently caused increasing involvement of the core structure of pain and affective empathy the more readers immersed in the text. The predominant locus of effects in the mid-cingulate cortex seems to reflect that the immersive experience was particularly facilitated by the motor component of affective empathy, as our stimuli from the Harry Potter series featured particularly vivid descriptions of the behavioural aspects of emotion.
Readability assessment of patient education materials on major otolaryngology association websites.
Eloy, Jean Anderson; Li, Shawn; Kasabwala, Khushabu; Agarwal, Nitin; Hansberry, David R; Baredes, Soly; Setzen, Michael
2012-11-01
Various otolaryngology associations provide Internet-based patient education material (IPEM) to the general public. However, this information may be written above the fourth- to sixth-grade reading level recommended by the American Medical Association (AMA) and National Institutes of Health (NIH). The purpose of this study was to assess the readability of otolaryngology-related IPEMs on various otolaryngology association websites and to determine whether they are above the recommended reading level for patient education materials. Analysis of patient education materials from 9 major otolaryngology association websites. The readability of 262 otolaryngology-related IPEMs was assessed with 8 numerical and 2 graphical readability tools. Averages were evaluated against national recommendations and between each source using analysis of variance (ANOVA) with post hoc Tukey's honestly significant difference (HSD) analysis. Mean readability scores for each otolaryngology association website were compared. Mean website readability scores using Flesch Reading Ease test, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG grading, Gunning Fog Index, New Dale-Chall Readability Formula, FORCAST Formula, New Fog Count Test, Raygor Readability Estimate, and the Fry Readability Graph ranged from 20.0 to 57.8, 9.7 to 17.1, 10.7 to 15.9, 11.6 to 18.2, 10.9 to 15.0, 8.6 to 16.0, 10.4 to 12.1, 8.5 to 11.8, 10.5 to 17.0, and 10.0 to 17.0, respectively. ANOVA results indicate a significant difference (P < .05) between the websites for each individual assessment. The IPEMs found on all otolaryngology association websites exceed the recommended fourth- to sixth-grade reading level.
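For reference, a small Python sketch of one of the measures listed above, the Flesch-Kincaid Grade Level; the syllable counter is a naive vowel-group heuristic, so the score is only approximate, and the sample text is made up.

# Rough Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
# The syllable counter is a crude vowel-group heuristic; dedicated readability tools do better.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("Tonsillectomy is a surgical procedure to remove the tonsils. "
          "Your doctor may recommend it for repeated infections.")
print(f"Flesch-Kincaid grade: {flesch_kincaid_grade(sample):.1f}")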
1983-05-01
empirical erosion model, with use of the debris-layer model optional. 1.1 INTERFACE WITH ISPP. ISPP is a collection of computer codes designed to calculate... expansion with the ODK code, 4. a two-dimensional, two-phase nozzle expansion with the TD2P code, 5. a turbulent boundary layer solution along the... [The remainder of this record is residue from an input flowchart: read the SSP namelist (ODE, BAL, ODK, TD2P, TEL, nozzle geometry), with input thermodynamic data for temperatures below 300 K if needed.]
Non-coding functions of alternative pre-mRNA splicing in development
Mockenhaupt, Stefan; Makeyev, Eugene V.
2015-01-01
A majority of messenger RNA precursors (pre-mRNAs) in the higher eukaryotes undergo alternative splicing to generate more than one mature product. By targeting the open reading frame region this process increases diversity of protein isoforms beyond the nominal coding capacity of the genome. However, alternative splicing also frequently controls output levels and spatiotemporal features of cellular and organismal gene expression programs. Here we discuss how these non-coding functions of alternative splicing contribute to development through regulation of mRNA stability, translational efficiency and cellular localization. PMID:26493705
2011-01-01
Background: Electronic patient records are generally coded using extensive sets of codes, but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods: Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results: There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices, with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. Conclusion: The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices, raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
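A minimal Python sketch of the two-parameter logistic IRT model described above: the probability that a given code is recorded for a patient with latent trait theta, given a discrimination parameter a and a calibration (location) parameter b. The parameter values are illustrative, not estimates from the study.

# Two-parameter logistic (2PL) IRT model: P(recorded) = 1 / (1 + exp(-a * (theta - b))).
import math

def p_recorded(theta, a, b):
    """Probability that a code is endorsed for a patient with latent trait theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

for a, b in [(2.5, 5.0), (0.7, 5.0)]:      # a strongly vs weakly discriminating code
    probs = [round(p_recorded(t, a, b), 3) for t in (3.0, 5.0, 7.0)]
    print(f"a={a}, b={b}: P(recorded) at theta = 3, 5, 7 -> {probs}")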
1988-05-12
the "load IC" menu option. A prompt will appear in the typescript window requesting the name of the knowledge base to be loaded. Enter...highlighted and then a prompt will appear in the typescript window. The prompt will be requesting the name of the file containing the message to be read in...the file name, the system will begin reading in the message. The listified message is echoed back in the typescript window. After that, the screen
Reading color barcodes using visual snakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaub, Hanspeter
2004-05-01
Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal whether the target is right-side-up or upside-down.
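A toy Python sketch of the colour-separation idea (not the report's statistical-pressure-snake algorithm): pixels along a scanline are converted to HSV, the value channel is thresholded, and runs of bars and spaces are measured. The pixel values and threshold are made up.

# Toy scanline reader: bars share the target's hue but differ slightly in HSV value,
# so thresholding the value channel yields bar/space run lengths (illustrative only).
import colorsys

def scanline_runs(rgb_pixels, value_threshold=0.5):
    """Return [is_bar, run_length] pairs along one row of RGB pixels in [0, 1]."""
    runs = []
    for r, g, b in rgb_pixels:
        _, _, v = colorsys.rgb_to_hsv(r, g, b)
        is_bar = v < value_threshold                  # darker stripes read as bars
        if runs and runs[-1][0] == is_bar:
            runs[-1][1] += 1
        else:
            runs.append([is_bar, 1])
    return runs

# A red target whose bars share its hue but are slightly darker in value.
row = [(0.9, 0.1, 0.1)] * 4 + [(0.4, 0.05, 0.05)] * 2 + [(0.9, 0.1, 0.1)] * 3
print(scanline_runs(row))                             # [[False, 4], [True, 2], [False, 3]]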
The Nuremberg Code and the Nuremberg Trial. A reappraisal.
Katz, J
1996-11-27
The Nuremberg Code includes 10 principles to guide physician-investigators in experiments involving human subjects. These principles, particularly the first principle on "voluntary consent," primarily were based on legal concepts because medical codes of ethics existent at the time of the Nazi atrocities did not address consent and other safeguards for human subjects. The US judges who presided over the proceedings did not intend the Code to apply only to the case before them, to be a response to the atrocities committed by the Nazi physicians, or to be inapplicable to research as it is customarily carried on in medical institutions. Instead, a careful reading of the judgment suggests that they wrote the Code for the practice of human experimentation whenever it is being conducted.
NASA Astrophysics Data System (ADS)
Heller, René
2018-03-01
The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. The header of each page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on the fly.
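A small Python sketch, not the SETI Encryption code itself, of two of the steps described: reading an ASCII PBM (P1) image into a flat bit string and writing an integer header value as little-endian binary. The input file name is a placeholder, and the PBM reader assumes whitespace-separated pixels.

# Illustrative helpers: flatten an ASCII PBM (P1) into bits and encode a little-endian header.
def read_pbm_bits(path):
    """Read an ASCII PBM (P1) with whitespace-separated pixels into a flat bit string."""
    with open(path) as fh:
        tokens = [t for line in fh for t in line.split("#", 1)[0].split()]   # drop comments
    assert tokens[0] == "P1", "expected an ASCII PBM file"
    width, height = int(tokens[1]), int(tokens[2])
    return width, height, "".join(tokens[3:3 + width * height])

def little_endian_bits(value, n_bits):
    return "".join(str((value >> i) & 1) for i in range(n_bits))

print(little_endian_bits(13, 8))                      # '10110000', least significant bit first
# width, height, bits = read_pbm_bits("glyph.pbm")    # hypothetical input image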
John, Ann; McGregor, Joanne; Fone, David; Dunstan, Frank; Cornish, Rosie; Lyons, Ronan A; Lloyd, Keith R
2016-03-15
The robustness of epidemiological research using routinely collected primary care electronic data to support policy and practice for common mental disorders (CMD) anxiety and depression would be greatly enhanced by appropriate validation of diagnostic codes and algorithms for data extraction. We aimed to create a robust research platform for CMD using population-based, routinely collected primary care electronic data. We developed a set of Read code lists (diagnosis, symptoms, treatments) for the identification of anxiety and depression in the General Practice Database (GPD) within the Secure Anonymised Information Linkage Databank at Swansea University, and assessed 12 algorithms for Read codes to define cases according to various criteria. Annual incidence rates were calculated per 1000 person years at risk (PYAR) to assess recording practice for these CMD between January 1st 2000 and December 31st 2009. We anonymously linked the 2799 MHI-5 Caerphilly Health and Social Needs Survey (CHSNS) respondents aged 18 to 74 years to their routinely collected GP data in SAIL. We estimated the sensitivity, specificity and positive predictive value of the various algorithms using the MHI-5 as the gold standard. The incidence of combined depression/anxiety diagnoses remained stable over the ten-year period in a population of over 500,000 but symptoms increased from 6.5 to 20.7 per 1000 PYAR. A 'historical' GP diagnosis for depression/anxiety currently treated plus a current diagnosis (treated or untreated) resulted in a specificity of 0.96, sensitivity 0.29 and PPV 0.76. Adding current symptom codes improved sensitivity (0.32) with a marginal effect on specificity (0.95) and PPV (0.74). We have developed an algorithm with a high specificity and PPV of detecting cases of anxiety and depression from routine GP data that incorporates symptom codes to reflect GP coding behaviour. We have demonstrated that using diagnosis and current treatment alone to identify cases for depression and anxiety using routinely collected primary care data will miss a number of true cases given changes in GP recording behaviour. The Read code lists plus the developed algorithms will be applicable to other routinely collected primary care datasets, creating a platform for future e-cohort research into these conditions.
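A minimal Python sketch of how such case-finding algorithms are scored against a gold standard such as MHI-5 caseness: sensitivity, specificity and positive predictive value from a 2x2 table. The counts are illustrative, not the study's data.

# Sensitivity, specificity and PPV from a 2x2 confusion table (illustrative counts only).
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Example: the algorithm flags 120 of 2,799 respondents and 90 of the flags are true cases.
print(diagnostic_metrics(tp=90, fp=30, fn=210, tn=2469))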
Junhasavasdikul, Detajin; Sukhato, Kanokporn; Srisangkaew, Suthan; Theera-Ampornpunt, Nawanan; Anothaisintawee, Thunyarat; Dellow, Alan
2017-08-01
The objective of this study is to compare the effectiveness of a "cartoon-style" handout with a "traditional-style" handout in a self-study assignment for preclinical medical students. Third-year medical students (n = 93) at the Faculty of Medicine Ramathibodi Hospital, Mahidol University, took a pre-learning assessment of their knowledge of intercostal chest drainage. They were then randomly allocated to receive either a "cartoon-style" or a "traditional-style" handout on the same topic. After studying these over a 2-week period, students completed a post-learning assessment and estimated their levels of reading completion. Of the 79 participants completing the post-learning test, those in the cartoon-style group achieved a score 13.8% higher than the traditional-style group (p = 0.018). A higher proportion of students in the cartoon-style group reported reading ≥75% of the handout content (70.7% versus 42.1%). In post-hoc analyses, students whose cumulative grade point averages (GPA) from previous academic assessments were in the middle and lower range achieved higher scores with the cartoon-style handout than with the traditional one. In the lower-GPA group, the use of a cartoon-style handout was independently associated with a higher score. Students given a cartoon-style handout reported reading more of the material and achieved higher post-learning test scores than students given a traditional handout.
78 FR 57525 - Suspension of Community Eligibility
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is..., Susp.. *do = Ditto. Code for reading third column: Emerg. --Emergency; Reg. --Regular; Susp. --Susp...
78 FR 69001 - Suspension of Community Eligibility
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-18
... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is.... *-do- =Ditto. Code for reading third column: Emerg.--Emergency; Reg.--Regular; Susp.--Suspension. Dated...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
A Cooperative Downloading Method for VANET Using Distributed Fountain Code.
Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi
2016-10-12
Cooperative downloading is one of the effective methods for increasing the amount of downloaded data in vehicular ad hoc networking (VANET). However, poor channel quality and short encounter times bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the high quality of service (QoS) requirements of some applications. Digital fountain codes (DFC) can be utilized in wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding, as well as the single feedback mechanism of DFC, cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method effectively reduces the transmission delay. Simulation results indicate the benefits of the proposed scheme in terms of increased amount of downloaded data and data receiving rate.
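A minimal Python sketch of the fountain-code idea underlying the method (an LT-style code, not the paper's concatenated hierarchical scheme): every encoded packet is the XOR of a subset of source blocks, and a peeling decoder can rebuild the data from enough received packets, so individual losses need no retransmission. The packet subsets are fixed here for clarity; real fountain codes draw them from a random degree distribution.

# Fountain-code toy: packets are XORs of block subsets; a peeling decoder recovers the
# data even though one packet is lost (illustrative only, not the paper's scheme).
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def peel_decode(packets, n_blocks):
    recovered = {}
    pending = [(set(idx), payload) for idx, payload in packets]
    progress = True
    while progress and len(recovered) < n_blocks:
        progress = False
        for idx, payload in pending:
            unknown = idx.difference(recovered)
            if len(unknown) == 1:                      # degree-one packet: solve for it
                i = unknown.pop()
                for j in idx - {i}:
                    payload = xor(payload, recovered[j])
                recovered[i] = payload
                progress = True
    return [recovered[i] for i in sorted(recovered)]

blocks = [b"VANET-", b"DATA-0", b"1-2-3."]
packets = [
    ({0}, blocks[0]),
    ({0, 1}, xor(blocks[0], blocks[1])),
    ({1, 2}, xor(blocks[1], blocks[2])),
    ({0, 2}, xor(blocks[0], blocks[2])),
]
received = packets[:1] + packets[2:]                   # one packet lost in transit
print(b"".join(peel_decode(received, len(blocks))))    # b'VANET-DATA-01-2-3.'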
Data Collection Answers - SEER Registrars
Read clarifications to existing coding rules, which should be implemented immediately. Data collection experts from American College of Surgeons Commission on Cancer, CDC National Program of Cancer Registries, and SEER Program compiled these answers.
2012-08-29
The straight lines in Curiosity's zigzag track marks are Morse code for JPL. The footprint is an important reference mark that the rover can use to drive more precisely via a system called visual odometry.
78 FR 2624 - Suspension of Community Eligibility
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-14
... current participation status of a community can be obtained from FEMA's Community Status Book (CSB). The.... Emerg; September 10, 1984, Reg; January 16, 2013, Susp. * -do- =Ditto. Code for reading third column...
The Geometric Organizer: A Study Technique.
ERIC Educational Resources Information Center
Derr, Alice M.; Peters, Chris L.
1986-01-01
The geometric organizer, a multisensory technique using visual mnemonic devices that key information to color-coded geometric shapes, can help learning disabled students read, organize, and study information in content subject textbooks. (CL)
Genome assembly from synthetic long read clouds
Kuleshov, Volodymyr; Snyder, Michael P.; Batzoglou, Serafim
2016-01-01
Motivation: Despite rapid progress in sequencing technology, assembling de novo the genomes of new species as well as reconstructing complex metagenomes remains major technological challenges. New synthetic long read (SLR) technologies promise significant advances towards these goals; however, their applicability is limited by high sequencing requirements and the inability of current assembly paradigms to cope with combinations of short and long reads. Results: Here, we introduce Architect, a new de novo scaffolder aimed at SLR technologies. Unlike previous assembly strategies, Architect does not require a costly subassembly step; instead it assembles genomes directly from the SLR’s underlying short reads, which we refer to as read clouds. This enables a 4- to 20-fold reduction in sequencing requirements and a 5-fold increase in assembly contiguity on both genomic and metagenomic datasets relative to state-of-the-art assembly strategies aimed directly at fully subassembled long reads. Availability and Implementation: Our source code is freely available at https://github.com/kuleshov/architect. Contact: kuleshov@stanford.edu PMID:27307620
SOPRA: Scaffolding algorithm for paired reads via statistical optimization.
Dayarian, Adel; Michael, Todd P; Sengupta, Anirvan M
2010-06-24
High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generations of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on equal footing for solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing constraints is iterated until one reaches a core set of consistent constraints. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space. For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various rearrangement errors. Applying SOPRA to real data from bacterial genomes, we were able to assemble contigs into scaffolds of significant length (N50 up to 200 Kb) with very few errors introduced in the process. In general, the methodology presented here will allow better scaffold assemblies of any type of mate pair sequencing data.
Biehl, Michael; Sadowski, Peter; Bhanot, Gyan; Bilal, Erhan; Dayarian, Adel; Meyer, Pablo; Norel, Raquel; Rhrissorrakrai, Kahn; Zeller, Michael D.; Hormoz, Sahand
2015-01-01
Motivation: Animal models are widely used in biomedical research for reasons ranging from practical to ethical. An important issue is whether rodent models are predictive of human biology. This has been addressed recently in the framework of a series of challenges designed by the systems biology verification for Industrial Methodology for Process Verification in Research (sbv IMPROVER) initiative. In particular, one of the sub-challenges was devoted to the prediction of protein phosphorylation responses in human bronchial epithelial cells, exposed to a number of different chemical stimuli, given the responses in rat bronchial epithelial cells. Participating teams were asked to make inter-species predictions on the basis of available training examples, comprising transcriptomics and phosphoproteomics data. Results: Here, the two best performing teams present their data-driven approaches and computational methods. In addition, post hoc analyses of the datasets and challenge results were performed by the participants and challenge organizers. The challenge outcome indicates that successful prediction of protein phosphorylation status in human based on rat phosphorylation levels is feasible. However, within the limitations of the computational tools used, the inclusion of gene expression data does not improve the prediction quality. The post hoc analysis of time-specific measurements sheds light on the signaling pathways in both species. Availability and implementation: A detailed description of the dataset, challenge design and outcome is available at www.sbvimprover.com. The code used by team IGB is provided under http://github.com/uci-igb/improver2013. Implementations of the algorithms applied by team AMG are available at http://bhanot.biomaps.rutgers.edu/wiki/AMG-sc2-code.zip. Contact: meikelbiehl@gmail.com PMID:24994890
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
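The first algorithm can be pictured with a few lines of Python: a read from a hardware device is intercepted and ANDed with a device-specific mask, so the BIT routine sees a value it does not expect. The names (read_register, FAULT_MASKS) and mask values are illustrative assumptions, not the NASA implementation.

```python
# Hypothetical per-device masks used by the test driver to force "stuck-low" bits.
FAULT_MASKS = {"status_reg": 0x7F, "mem_word": 0x00FF}
inject_errors = True                               # test-driver switch

def read_register(device):
    """Stand-in for the real hardware access normally performed by the BIT routine."""
    return {"status_reg": 0xFF, "mem_word": 0xA5A5}[device]

def bit_read(device):
    """Instrumented read: returns corrupted data when error injection is enabled."""
    value = read_register(device)
    if inject_errors:
        value &= FAULT_MASKS[device]               # AND with the mask clears expected bits
    return value

# The BIT pass/fail check now exercises its failure path:
assert bit_read("status_reg") != 0xFF              # 0xFF & 0x7F == 0x7F, flagged as a fault
```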
A method of non-contact reading code based on computer vision
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan
2018-03-01
To guarantee the security of information exchange between internal and external networks (trusted and untrusted networks), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network-isolation methods. Using only a computer monitor, a camera, and other common equipment, the information to be exchanged is processed in several steps: the data are encoded as an image, a standard image is generated and displayed, the actual image is captured, the homography matrix is calculated from calibration, and the image is corrected for distortion and decoded. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data, reaching a transfer speed of 24 kb/s. The experiments show that the algorithm offers high security, high speed, and little loss of information, so it can meet the daily need of confidentiality departments to update data effectively and reliably, and it solves the difficulty of exchanging computer information between classified and non-classified networks, with distinctive originality, practicability, and practical research value.
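One step in this pipeline, estimating the homography between the displayed reference pattern and the captured camera frame and then correcting the distortion before decoding, can be sketched with OpenCV as below. The corner coordinates, file names and pattern size are illustrative assumptions; in the actual system they would come from the calibration procedure the authors describe.

```python
import cv2
import numpy as np

# Given the four corners of the displayed code pattern as they appear in the camera
# frame, estimate the homography to the ideal (fronto-parallel) pattern and warp the
# frame so the decoder sees an undistorted image. File names are placeholders.
captured = cv2.imread("camera_frame.png")
corners_in_frame = np.float32([[102, 88], [618, 74], [640, 472], [95, 490]])   # detected marks
pattern_size = (600, 400)                                                       # width, height of ideal pattern
corners_ideal = np.float32([[0, 0], [pattern_size[0], 0],
                            [pattern_size[0], pattern_size[1]], [0, pattern_size[1]]])

H, _ = cv2.findHomography(corners_in_frame, corners_ideal)
rectified = cv2.warpPerspective(captured, H, pattern_size)
cv2.imwrite("rectified_pattern.png", rectified)   # rectified image, ready for decoding
```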
Saavedra-Lira, E; Pérez-Montfort, R
1994-05-16
We isolated three overlapping clones from a DNA genomic library of Entamoeba histolytica strain HM1:IMSS, whose translated nucleotide (nt) sequence shows similarities of 51, 48 and 47% with the amino acid (aa) sequences reported for the pyruvate phosphate dikinases from Bacteroides symbiosus, maize and Flaveria trinervia, respectively. The reading frame determined codes for a protein of 886 aa.
Anderson, Michael L; Chemero, Tony
2013-06-01
Clark appears to be moving toward epistemic internalism, which he once rightly rejected. This results from a double over-interpretation of predictive coding's significance. First, Clark argues that predictive coding offers a Grand Unified Theory (GUT) of brain function. Second, he over-reads its epistemic import, perhaps even conflating causal and epistemic mediators. We argue instead for a plurality of neurofunctional principles.
NASA Technical Reports Server (NTRS)
Ryer, M. J.
1978-01-01
HAL/S is a computer programming language; it is a representation for algorithms which can be interpreted by either a person or a computer. HAL/S compilers transform blocks of HAL/S code into machine language which can then be directly executed by a computer. When the machine language is executed, the algorithm specified by the HAL/S code (source) is performed. This document describes how to read and write HAL/S source.
NRL Radar Division C++ Coding Standard
2016-12-05
The coding standard provides tools aimed at helping C++ programmers develop programs that are free of common types of errors, maintainable by...different programmers, portable to other operating systems, easy to read and understand, and have a consistent style. Questions of design, such as how to...mandatory for any organization with quality goals. The purpose of this standard is to provide tools aimed at helping C++ programmers develop programs that
2016-08-01
codes contain control functions (CFs) that are reserved for encoding various controls, identification, and other special-purpose functions. Time...set of CF bits for the encoding of various control, identification, and other special-purpose functions. The control bits may be programmed in any... recycles yearly. • There are 18 CFs that occur between position identifiers P6 and P8. Any CF bit or combination of bits can be programmed to read a
ERIC Educational Resources Information Center
Wall, Candace A.; Rafferty, Lisa A.; Camizzi, Mariya A.; Max, Caroline A.; Van Blargan, David M.
2016-01-01
Many students who struggle to obtain the alphabetic principle are at risk for being identified as having a reading disability and would benefit from additional explicit phonics instruction as a remedial measure. In this action research case study, the research team conducted two experiments to investigate the effects of a color-coded, onset-rime,…
A dual-route approach to orthographic processing.
Grainger, Jonathan; Ziegler, Johannes C
2011-01-01
In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).
Cai, Yong; Li, Xiwen; Wang, Runmiao; Yang, Qing; Li, Peng; Hu, Hao
2016-01-01
Currently, chemical fingerprint comparison and analysis mainly rely on professional equipment and software, which are expensive and inconvenient. This study aims to integrate QR (Quick Response) codes carrying quality data with mobile intelligent technology to develop a convenient query terminal for tracing quality across the whole industrial chain of TCM (traditional Chinese medicine). Three herbal medicines were randomly selected and their two-dimensional (2D) chemical barcode fingerprints were constructed. A smartphone application (app) based on the Android system was developed to read the initial data of the 2D chemical barcodes and to compare multiple fingerprints from different batches of the same species or from different species. It was demonstrated that there were no significant differences between the original and scanned TCM chemical fingerprints. Meanwhile, chemical fingerprint QR codes of different TCMs could be rendered in the same coordinate system, showing their differences very intuitively. To distinguish variations between chemical fingerprints more directly, a linear interpolation angle cosine similarity algorithm (LIACSA) was proposed to obtain a similarity ratio. This study showed that QR codes can be used as an effective information carrier to transfer quality data. The smartphone application can rapidly read quality information from QR codes and convert the data into TCM chemical fingerprints. PMID:27780256
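The comparison step can be pictured as resampling two fingerprints onto a common axis by linear interpolation and then taking the cosine of the angle between them. The numpy sketch below illustrates that idea; the toy retention-time data and the exact formula are assumptions for illustration and are not the authors' LIACSA implementation.

```python
import numpy as np

def fingerprint_similarity(x1, y1, x2, y2, n_points=500):
    """Interpolate two chromatographic fingerprints onto a shared axis and return
    their cosine similarity (an LIACSA-like similarity ratio)."""
    lo, hi = max(x1.min(), x2.min()), min(x1.max(), x2.max())
    grid = np.linspace(lo, hi, n_points)
    a = np.interp(grid, x1, y1)          # linear interpolation onto the common grid
    b = np.interp(grid, x2, y2)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy retention-time / intensity data standing in for two batches of the same herb
x1, y1 = np.linspace(0, 30, 300), np.abs(np.sin(np.linspace(0, 30, 300)))
x2, y2 = np.linspace(0, 30, 280), np.abs(np.sin(np.linspace(0, 30, 280))) * 1.1
print(f"similarity ratio: {fingerprint_similarity(x1, y1, x2, y2):.3f}")   # close to 1.0
```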
The neuronal encoding of information in the brain.
Rolls, Edmund T; Treves, Alessandro
2011-11-01
We describe the results of quantitative information theoretic analyses of neural encoding, particularly in the primate visual, olfactory, taste, hippocampal, and orbitofrontal cortex. Most of the information turns out to be encoded by the firing rates of the neurons, that is by the number of spikes in a short time window. This has been shown to be a robust code, for the firing rate representations of different neurons are close to independent for small populations of neurons. Moreover, the information can be read fast from such encoding, in as little as 20 ms. In quantitative information theoretic studies, only a little additional information is available in temporal encoding involving stimulus-dependent synchronization of different neurons, or the timing of spikes within the spike train of a single neuron. Feature binding appears to be solved by feature combination neurons rather than by temporal synchrony. The code is sparse distributed, with the spike firing rate distributions close to exponential or gamma. A feature of the code is that it can be read by neurons that take a synaptically weighted sum of their inputs. This dot product decoding is biologically plausible. Understanding the neural code is fundamental to understanding not only how the cortex represents, but also processes, information. Copyright © 2011 Elsevier Ltd. All rights reserved.
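The "dot product decoding" mentioned above can be illustrated with a few lines of numpy: a downstream unit forms a synaptically weighted sum of the observed firing rates for each candidate stimulus and picks the largest. The tuning curves, window length and weight normalization below are simulated assumptions for illustration, not recorded data or the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated tuning curves: mean firing rates (Hz) of 50 neurons for 4 stimuli, drawn
# from a gamma distribution to mimic the skewed rate distributions noted above.
n_neurons, n_stimuli = 50, 4
tuning = rng.gamma(shape=2.0, scale=5.0, size=(n_stimuli, n_neurons))
templates = tuning / np.linalg.norm(tuning, axis=1, keepdims=True)   # unit-norm synaptic weight vectors

def dot_product_decode(rates, templates):
    """Pick the stimulus whose weight vector gives the largest weighted sum (dot product)
    with the observed rate vector -- the biologically plausible readout described above."""
    return int(np.argmax(templates @ rates))

# Spike counts observed for stimulus 2, accumulated over several short readout windows
true_stimulus = 2
counts = rng.poisson(tuning[true_stimulus] * 0.5)
print("decoded:", dot_product_decode(counts, templates))   # should usually recover stimulus 2
```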
Richards, Todd L; Abbott, Robert D; Yagle, Kevin; Peterson, Dan; Raskind, Wendy; Berninger, Virginia W
2017-01-01
To understand mental self-government of the developing reading and writing brain, correlations of clustering coefficients on fMRI reading or writing tasks with BASC 2 Adaptivity ratings (time 1 only) or working memory components (time 1 before and time 2 after instruction previously shown to improve achievement and change magnitude of fMRI connectivity) were investigated in 39 students in grades 4 to 9 who varied along a continuum of reading and writing skills. A Philips 3T scanner measured connectivity during six leveled fMRI reading tasks (subword-letters and sounds, word-word-specific spellings or affixed words, syntax comprehension-with and without homonym foils or with and without affix foils, and text comprehension) and three fMRI writing tasks-writing next letter in alphabet, adding missing letter in word spelling, and planning for composing. The Brain Connectivity Toolbox generated clustering coefficients based on the cingulo-opercular (CO) network; after controlling for multiple comparisons and movement, significant fMRI connectivity clustering coefficients for CO were identified in 8 brain regions bilaterally (cingulate gyrus, superior frontal gyrus, middle frontal gyrus, inferior frontal gyrus, superior temporal gyrus, insula, cingulum-cingulate gyrus, and cingulum-hippocampus). BASC2 Parent Ratings for Adaptivity were correlated with CO clustering coefficients on three reading tasks (letter-sound, word affix judgments and sentence comprehension) and one writing task (writing next letter in alphabet). Before instruction, each behavioral working memory measure (phonology, orthography, morphology, and syntax coding, phonological and orthographic loops for integrating internal language and output codes, and supervisory focused and switching attention) correlated significantly with at least one CO clustering coefficient. After instruction, the patterning of correlations changed with new correlations emerging. Results show that the reading and writing brain's mental government, supported by both CO Adaptive Control and multiple working memory components, had changed in response to instruction during middle childhood/early adolescence.
NASA Astrophysics Data System (ADS)
Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.
2018-01-01
The genetic code is degenerate and it is assumed that redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents a Genetic Code Analysis Toolkit (GCAT) which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and others. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/
Hippocampal Remapping Is Constrained by Sparseness rather than Capacity
Kammerer, Axel; Leibold, Christian
2014-01-01
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodical entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570
Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter
2017-05-15
The past decade has seen the introduction of new technologies that have increasingly lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain of up to 51% is achieved compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain of up to 41% is achieved compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain of up to 34% is achieved compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh. tom.paridaens@ugent.be. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
77 FR 53775 - Suspension of Community Eligibility
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is...; September 5, 2012, Susp.. *do = Ditto. Code for reading third column: Emerg.--Emergency; Reg.--Regular; Susp...
77 FR 74607 - Suspension of Community Eligibility
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-17
... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is...; ......do Do. Areas.. September 29, 1986, Reg; December 18, 2012, Susp. *......do = Ditto. Code for reading...
Safe Building Code Incentive Act of 2013
Sen. Menendez, Robert [D-NJ
2013-05-08
Senate - 05/08/2013 Read twice and referred to the Committee on Homeland Security and Governmental Affairs. (All Actions) Tracker: This bill has the status Introduced.
Safe Building Code Incentive Act of 2012
Sen. Menendez, Robert [D-NJ
2012-12-19
Senate - 12/19/2012 Read twice and referred to the Committee on Homeland Security and Governmental Affairs. (All Actions) Tracker: This bill has the status Introduced.
Safe Building Code Incentive Act of 2013
Sen. Menendez, Robert [D-NJ
2013-05-09
Senate - 05/09/2013 Read twice and referred to the Committee on Homeland Security and Governmental Affairs. (All Actions) Tracker: This bill has the status Introduced.
Cutting Costly Codes Act of 2013
Sen. Coburn, Tom [R-OK
2013-05-16
Senate - 05/16/2013 Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (All Actions) Tracker: This bill has the status Introduced.
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
Rodríguez, Barbara L; Hines, Rachel; Montiel, Miguel
2009-07-01
The aim of this investigation was to describe and compare the communication behaviors and interactive reading strategies used by Mexican American mothers of low- and middle-socioeconomic status (SES) background during shared book reading. Twenty Mexican American mother-child dyads from the Southwestern United States were observed during two book reading sessions. The data were coded across a number of communication behavior categories and were analyzed using the Adult/Child Interactive Reading Inventory (ACIRI; A. DeBruin-Parecki, 1999). Mexican American mothers used a variety of communication behaviors during shared book reading with their preschool children. Significant differences between the SES groups regarding the frequency of specific communication behaviors were revealed. Middle-SES mothers used positive feedback and yes/no questions more often than did low-SES mothers. Mexican American mothers also used a variety of interactive reading strategies with varying frequency, as measured by the ACIRI. They enhanced attention to text some of the time, but rarely promoted interactive reading/supported comprehension or used literacy strategies. There were no significant differences between the SES groups regarding the frequency of interactive reading strategies. Parent literacy programs should supplement Mexican American mothers' communication behaviors and interactive reading strategies to improve effectiveness and participation.
Differences in game reading between selected and non-selected youth soccer players.
Den Hartigh, Ruud J R; Van Der Steen, Steffie; Hakvoort, Bas; Frencken, Wouter G P; Lemmink, Koen A P M
2018-02-01
Applying an established theory of cognitive development-Skill Theory-the current study compares the game-reading skills of youth players selected for a soccer school of a professional soccer club (n = 49) and their non-selected peers (n = 38). Participants described the actions taking place in videos of soccer game plays, and their verbalisations were coded using Skill Theory. Compared to the non-selected players, the selected players generally demonstrated higher levels of complexity in their game-reading, and structured the information of game elements-primarily the player, teammate and field-at higher complexity levels. These results demonstrate how Skill Theory can be used to assess, and distinguish game-reading of youth players with different expertise, a skill important for soccer, but also for other sports.
Tobin, M B; Kovacevic, S; Madduri, K; Hoskins, J A; Skatrud, P L; Vining, L C; Stuttard, C; Miller, J R
1991-01-01
Lysine epsilon-aminotransferase (LAT) in the beta-lactam-producing actinomycetes is considered to be the first step in the antibiotic biosynthetic pathway. Cloning of restriction fragments from Streptomyces clavuligerus, a beta-lactam producer, into Streptomyces lividans, a nonproducer that lacks LAT activity, led to the production of LAT in the host. DNA sequencing of restriction fragments containing the putative lat gene revealed a single open reading frame encoding a polypeptide of approximately Mr 49,000. Expression of this coding sequence in Escherichia coli led to the production of LAT activity. Hence, LAT activity in S. clavuligerus is derived from a single polypeptide. A second open reading frame began immediately downstream from lat. Comparison of this partial sequence with the sequences of delta-(L-alpha-aminoadipyl)-L-cysteinyl-D-valine (ACV) synthetases from Penicillium chrysogenum and Cephalosporium acremonium and with nonribosomal peptide synthetases (gramicidin S and tyrocidine synthetases) found similarities among the open reading frames. Since mapping of the putative N and C termini of S. clavuligerus pcbAB suggests that the coding region occupies approximately 12 kbp and codes for a polypeptide related in size to the fungal ACV synthetases, the molecular characterization of the beta-lactam biosynthetic cluster between pcbC and cefE (approximately 25 kbp) is nearly complete. PMID:1917855
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
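As a concrete picture of the modelling approach being advocated, the sketch below fits a minimal site-occupancy model with false positives by maximum likelihood, in the spirit of the mixture formulation of Royle and Link: each site is occupied with probability psi, detections at occupied sites occur with probability p11 per survey and false-positive detections at unoccupied sites with probability p10. The simulated data, parameter names and starting values are illustrative assumptions, not the authors' code; note that the mixture is only weakly identifiable, so the optimizer is started with p10 well below p11.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def neg_log_likelihood(params, detections, K):
    """Mixture likelihood: detections at occupied sites ~ Binom(K, p11),
    at unoccupied sites ~ Binom(K, p10); parameters are on the logit scale."""
    psi, p11, p10 = 1 / (1 + np.exp(-np.asarray(params)))
    lik = psi * binom.pmf(detections, K, p11) + (1 - psi) * binom.pmf(detections, K, p10)
    return -np.sum(np.log(lik))

# Simulate 200 sites with 5 PCR replicates each
rng = np.random.default_rng(1)
S, K, psi_true, p11_true, p10_true = 200, 5, 0.4, 0.7, 0.05
occupied = rng.random(S) < psi_true
detections = rng.binomial(K, np.where(occupied, p11_true, p10_true))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0, -2.0], args=(detections, K))
psi_hat, p11_hat, p10_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi={psi_hat:.2f}, p11={p11_hat:.2f}, p10={p10_hat:.2f}")   # estimates should be near 0.4, 0.7, 0.05
```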
Coding Complete Genome for the Mogiana Tick Virus, a Jingmenvirus Isolated from Ticks in Brazil
2017-05-04
sequences for all four genome segments. We downloaded the raw Illumina sequence reads from the NCBI Short Read Archive (GenBank...MGTV genome segments through sequence similarity (BLASTN) to the published genome of Jingmen tick virus (JMTV) isolate SY84 (GenBank: KJ001579-KJ001582...2014. Standards for sequencing viral genomes in the era of high-throughput sequencing . MBio 5:e01360–14. 8. Bankevich A, Nurk S, Antipov
"What" and "where" in word reading: ventral coding of written words revealed by parietal atrophy.
Vinckier, Fabien; Naccache, Lionel; Papeix, Caroline; Forget, Joaquim; Hahn-Barma, Valerie; Dehaene, Stanislas; Cohen, Laurent
2006-12-01
The visual system of literate adults develops a remarkable perceptual expertise for printed words. To delineate the aspects of this competence intrinsic to the occipitotemporal "what" pathway, we studied a patient with bilateral lesions of the occipitoparietal "where" pathway. Depending on critical geometric features of the display (rotation angle, letter spacing, mirror reversal, etc.), she switched from a good performance, when her intact ventral pathway was sufficient to encode words, to severely impaired reading, when her parietal lesions prevented the use of alternative reading strategies as a result of spatial and attentional impairments. In particular, reading was disrupted (a) by rotating words by more than 50 degrees, providing an approximation of the invariance range for word encoding in the ventral pathway; (b) by separating letters with double spaces, revealing the limits of letter grouping into perceptual wholes; (c) by mirror-reversing words, showing that words escape the default mirror-invariant representation of visual objects in the ventral pathway. Moreover, because of her parietal lesions, she was unable to discriminate mirror images of common objects, although she was excellent with reversible pseudowords, confirming that the breaking of mirror symmetry was intrinsic to the occipitotemporal cortex. Thus, charting the display conditions associated with preserved or impaired performance allowed us to infer properties of word coding in the normal ventral pathway and to delineate the roles of the parietal lobes in single-word recognition.
Optical encryption and QR codes: secure and noise-free information retrieval.
Barrera, John Fredy; Mira, Alejandro; Torroba, Roberto
2013-03-11
We introduce for the first time the concept of an information "container" used before a standard optical encryption procedure. The "container" selected is a QR code, which offers the main advantage of being tolerant to pollutant speckle noise. Moreover, QR codes can be read by smartphones, which are massively used devices. Additionally, the QR code adds another security step to the benefits the optical methods already provide. The QR code is generated by means of freely available software. The development of this concept proves that the speckle noise polluting the output of ordinary optical encryption procedures can be avoided, making the adoption of these techniques more attractive. Results collected with an actual smartphone are shown to validate our proposal.
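The "container" idea, wrapping the payload in a QR code so that the error correction built into the QR standard absorbs residual noise after decryption, can be sketched as follows. This assumes the third-party Python packages qrcode (with Pillow) and pyzbar are available; the optical encryption and decryption steps themselves are not reproduced here.

```python
import qrcode
from pyzbar.pyzbar import decode
from PIL import Image

payload = "account=12345;key=secret"
img = qrcode.make(payload)                 # build the QR "container" as a PIL image
img.save("container.png")

# ... optical encryption / decryption of container.png would happen here ...

recovered = decode(Image.open("container.png"))
print(recovered[0].data.decode("ascii"))   # -> account=12345;key=secret
```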
Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P
2007-01-01
The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes are developed aimed at transporting radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed together with the new input options. Examples are given of employment of the code in internal and external dosimetry and comparisons with results from other groups are reported.
Radiology and Ethics Education.
Camargo, Aline; Liu, Li; Yousem, David M
2017-09-01
The purpose of this study is to assess medical ethics knowledge among trainees and practicing radiologists through an online survey that included questions about the American College of Radiology Code of Ethics and the American Medical Association Code of Medical Ethics. Most survey respondents reported that they had never read the American Medical Association Code of Medical Ethics or the American College of Radiology Code of Ethics (77.2% and 67.4% of respondents, respectively). With regard to ethics education during medical school and residency, 57.3% and 70.0% of respondents, respectively, found such education to be insufficient. Medical ethics training should be highlighted during residency, at specialty society meetings, and in journals and online resources for radiologists.
Digital barcodes of suspension array using laser induced breakdown spectroscopy
He, Qinghua; Liu, Yixi; He, Yonghong; Zhu, Liang; Zhang, Yilong; Shen, Zhiyuan
2016-01-01
We present a coding method for suspension arrays based on laser-induced breakdown spectroscopy (LIBS), which advances the barcodes from analog to digital. As the foundation of digital optical barcodes, nanocrystal-encoded microspheres are prepared with a self-assembly encapsulation method. We confirm that digital multiplexing of the LIBS-based coding method is feasible, since the microspheres can be coded with directly read-out wavelength data, and the method avoids fluorescence signal crosstalk between barcodes and analyte tags, leading to overall advantages in accuracy and stability over current fluorescent multicolor coding methods. This demonstration increases the capability of multiplexed detection and accurate screening, expanding the applications of suspension arrays in life science. PMID:27808270
n-Nucleotide circular codes in graph theory.
Fimmel, Elena; Michel, Christian J; Strüngmann, Lutz
2016-03-13
The circular code theory proposes that genes are constituted of two trinucleotide codes: the classical genetic code with 61 trinucleotides for coding the 20 amino acids (except the three stop codons {TAA,TAG,TGA}) and a circular code based on 20 trinucleotides for retrieving, maintaining and synchronizing the reading frame. It relies on two main results: the identification of a maximal C(3) self-complementary trinucleotide circular code X in genes of bacteria, eukaryotes, plasmids and viruses (Michel 2015 J. Theor. Biol. 380, 156-177. (doi:10.1016/j.jtbi.2015.04.009); Arquès & Michel 1996 J. Theor. Biol. 182, 45-58. (doi:10.1006/jtbi.1996.0142)) and the finding of X circular code motifs in tRNAs and rRNAs, in particular in the ribosome decoding centre (Michel 2012 Comput. Biol. Chem. 37, 24-37. (doi:10.1016/j.compbiolchem.2011.10.002); El Soufi & Michel 2014 Comput. Biol. Chem. 52, 9-17. (doi:10.1016/j.compbiolchem.2014.08.001)). The universally conserved nucleotides A1492 and A1493 and the conserved nucleotide G530 are included in X circular code motifs. Recently, dinucleotide circular codes were also investigated (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631); Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)). As the genetic motifs of different lengths are ubiquitous in genes and genomes, we introduce a new approach based on graph theory to study in full generality n-nucleotide circular codes X, i.e. of length 2 (dinucleotide), 3 (trinucleotide), 4 (tetranucleotide), etc. Indeed, we prove that an n-nucleotide code X is circular if and only if the corresponding graph [Formula: see text] is acyclic. Moreover, the maximal length of a path in [Formula: see text] corresponds to the window of nucleotides in a sequence for detecting the correct reading frame. Finally, the graph theory of tournaments is applied to the study of dinucleotide circular codes. It has full equivalence between the combinatorics theory (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631)) and the group theory (Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)) of dinucleotide circular codes while its mathematical approach is simpler. © 2016 The Author(s).
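The graph criterion stated above is easy to implement for trinucleotide codes: build a directed graph with an edge N1 → N2N3 and an edge N1N2 → N3 for every codon N1N2N3, then test for cycles. The networkx sketch below follows that construction; the two small example codes are illustrations chosen here, not codes discussed in the paper.

```python
import networkx as nx

def circular_graph(code):
    """Associated graph of a trinucleotide code X: for each codon N1N2N3,
    add the edges N1 -> N2N3 and N1N2 -> N3."""
    g = nx.DiGraph()
    for codon in code:
        g.add_edge(codon[0], codon[1:])
        g.add_edge(codon[:2], codon[2])
    return g

def is_circular(code):
    """A trinucleotide code is circular if and only if its associated graph is acyclic."""
    return nx.is_directed_acyclic_graph(circular_graph(code))

print(is_circular({"AAC", "ACC"}))           # True: the associated graph has no cycle
print(is_circular({"ACG", "CGA", "GAC"}))    # False: A -> CG -> A is a cycle

# For circular codes, the longest path in this acyclic graph relates to the window of
# nucleotides needed to retrieve the reading frame, as noted in the abstract above.
```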
Death of a dogma: eukaryotic mRNAs can code for more than one protein.
Mouilleron, Hélène; Delcourt, Vivian; Roucou, Xavier
2016-01-08
mRNAs carry the genetic information that is translated by ribosomes. The traditional view of a mature eukaryotic mRNA is a molecule with three main regions, the 5' UTR, the protein coding open reading frame (ORF) or coding sequence (CDS), and the 3' UTR. This concept assumes that ribosomes translate one ORF only, generally the longest one, and produce one protein. As a result, in the early days of genomics and bioinformatics, one CDS was associated with each protein-coding gene. This fundamental concept of a single CDS is being challenged by increasing experimental evidence indicating that annotated proteins are not the only proteins translated from mRNAs. In particular, mass spectrometry (MS)-based proteomics and ribosome profiling have detected productive translation of alternative open reading frames. In several cases, the alternative and annotated proteins interact. Thus, the expression of two or more proteins translated from the same mRNA may offer a mechanism to ensure the co-expression of proteins which have functional interactions. Translational mechanisms already described in eukaryotic cells indicate that the cellular machinery is able to translate different CDSs from a single viral or cellular mRNA. In addition to summarizing data showing that the protein coding potential of eukaryotic mRNAs has been underestimated, this review aims to challenge the single translated CDS dogma. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
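A toy scan over the three forward reading frames shows how a single transcript can harbour alternative CDSs in addition to the annotated (usually longest) one. The sequence and the minimum-length threshold below are made up for illustration; real alternative-ORF discovery relies on ribosome profiling and proteomics evidence as described above.

```python
def find_orfs(mrna, min_codons=5):
    """Return (frame, start, end, codons_before_stop) for every ATG...stop open reading
    frame in the three forward frames of an mRNA sequence (toy illustration)."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        codons = [mrna[i:i + 3] for i in range(frame, len(mrna) - 2, 3)]
        start = None
        for pos, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = pos
            elif codon in stops and start is not None:
                if pos - start >= min_codons:
                    orfs.append((frame, frame + 3 * start, frame + 3 * (pos + 1), pos - start))
                start = None
    return orfs

# A made-up transcript with one ORF in frame 2 and a second, downstream ORF in frame 0
mrna = "GC" + "ATG" + "GCT" * 8 + "TAA" + "G" + "ATG" + "TTC" * 6 + "TGA"
print(find_orfs(mrna))   # [(0, 33, 57, 7), (2, 2, 32, 9)]
```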
Non-coding functions of alternative pre-mRNA splicing in development.
Mockenhaupt, Stefan; Makeyev, Eugene V
2015-12-01
A majority of messenger RNA precursors (pre-mRNAs) in the higher eukaryotes undergo alternative splicing to generate more than one mature product. By targeting the open reading frame region this process increases diversity of protein isoforms beyond the nominal coding capacity of the genome. However, alternative splicing also frequently controls output levels and spatiotemporal features of cellular and organismal gene expression programs. Here we discuss how these non-coding functions of alternative splicing contribute to development through regulation of mRNA stability, translational efficiency and cellular localization. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Coal Preparation and Processing Plants New Source Performance Standards (NSPS)
Learn about the NSPS regulation for coal preparation and processing plants by reading the rule summary, the rule history, the Code of Federal Regulations text, the Federal Register, and additional docket documents.
Community Building Code Administration Grant Act of 2009
Sen. Landrieu, Mary L. [D-LA
2009-05-05
Senate - 05/05/2009 Read twice and referred to the Committee on Banking, Housing, and Urban Affairs. (All Actions) Tracker: This bill has the status Introduced.
Local Efforts to Reduce Radon Risks - Highlights and Lessons Learned
In these stories, you will read about people who educated their families, neighbors, colleagues, and communities, and challenged local builders, governments, code enforcement officials, and others to protect the public from indoor radon.
Clay Ceramics Manufacturing: National Emission Standards for Hazardous Air Pollutants (NESHAP)
Learn about the NESHAP regulation for clay ceramics manufacturing by reading the rule summary, rule history, Code of Federal Regulations text, and additional resources such as fact sheets and background information documents.
The Mediterranean Crucible, 1942-1943: Did Technology or Tenets Achieve Air Superiority
2012-06-01
messages of critical Luftwaffe communications. The decryption, analysis, and dissemination of messages from the German Enigma coding machine, facilitated...the ability to “read the Luftwaffe [Enigma] keys in North Africa from the first day of their introduction” in the theater.5 This system, code ...IRIS no. 118168, in USAF Collection, AFHRA, Part IV, 1. 21 AWPD-42, Part IV, 1. superiority which enables its possessor to conduct air
Department of Defense Sustainability: Progress and Plans for the Future
2011-11-02
Teleworking DoDI (Oct 2010) •Sulfur Hexafluoride Risk Management (Oct 2010) •Integrated Solid Waste Management DoDI (being prepared) •Sustainable Ranges (being...electronically track compliance •Air Force stormwater hydrology analysis tool to estimate pre‐ and post‐hydrology IN PROGRESS Teleworking •Coding employees...as: ineligible eligible/regular eligible/ad hoc •Accurately capturing actual time teleworked – still figuring out the best way Sustainable Planning
NASA Astrophysics Data System (ADS)
Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo
2006-01-01
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independent of the HR one, but they can also help each other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has a comparable decoded quality, lower startup delay and lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames by the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.
Calculations concerning the HCO(+)/HOC(+) abundance ratio in dense interstellar clouds
NASA Technical Reports Server (NTRS)
Defrees, D. J.; Mclean, A. D.; Herbst, E.
1984-01-01
Calculations have been performed to determine the rate coefficients of several reactions involved in both the formation and depletion of interstellar HCO(+) and HOC(+). The abundance of HOC(+) deduced from these calculations is consistent with the tentative identification of HOC(+) in Sgr B2 by Woods et al. (1983). The large HCO(+)/HOC(+) abundance ratio observed by Woods et al. is due at least in part to a more rapid formation rate for HCO(+) and probably due as well to a more rapid depletion rate for HOC(+).
An Adaptive Resonance Theory account of the implicit learning of orthographic word forms.
Glotin, H; Warnier, P; Dandurand, F; Dufau, S; Lété, B; Touzet, C; Ziegler, J C; Grainger, J
2010-01-01
An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read. 2009 Elsevier Ltd. All rights reserved.
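The open-bigram input code described above is simple to reproduce: each word is represented by the unordered set of ordered letter pairs it contains. The sketch below imposes no limit on the distance between paired letters, whereas some open-bigram schemes only pair letters separated by at most a few positions; treat that choice as an illustrative assumption.

```python
from itertools import combinations

def open_bigrams(word):
    """Return the unordered set of ordered letter pairs (open bigrams) of a word,
    a flexible prelexical orthographic code of the kind fed to the ART network."""
    return {a + b for a, b in combinations(word.upper(), 2)}

print(open_bigrams("table"))                        # 10 ordered pairs: TA, TB, TL, TE, AB, AL, AE, BL, BE, LE
print(open_bigrams("cat") & open_bigrams("cart"))   # {'CA', 'CT', 'AT'}: the code of 'cat' survives the inserted letter
```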
Auxiliary Library Explorer (ALEX) Development
2016-02-01
non-empty cells. This is a laborious manual task and could probably have been avoided by using Java code to read the data directly from Excel. In fact...it might be even easier to leave the data as a comma-separated values (CSV) file and read the data in with Java, although this could create other...This is first implemented using the MakeFullDatabaseapp Java project, which performs an SQL query on the DSpace data to return a list of items for which
Three computer codes to read, plot and tabulate operational test-site recorded solar data
NASA Technical Reports Server (NTRS)
Stewart, S. D.; Sampson, R. S., Jr.; Stonemetz, R. E.; Rouse, S. L.
1980-01-01
Computer programs used to process data that will be used in the evaluation of collector efficiency and solar system performance are described. The program, TAPFIL, reads data from an IBM 360 tape containing information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites. Two other programs, CHPLOT and WRTCNL, plot and tabulate the data from the direct access, unformatted TAPFIL file. The methodology of the programs, their inputs, and their outputs are described.
Patel, Harshali K; Bapat, Shweta S; Bhansali, Archita H; Sansgiry, Sujit S
2018-01-01
The objective of this study was to develop one-page (1-page) prescription drug information leaflets (PILs) and assess their impact on information processing variables across 2 levels of patient involvement. One-page PILs were developed using cognitive principles to lower mental effort and improve comprehension. An experimental, 3 × 2 repeated measures study was conducted to determine the impact of cognitive effort, manipulated using leaflet type, on comprehension across 2 levels (high/low) of patient involvement. Adults (≥18 years) in a university setting in Houston were recruited for the study. Each participant was exposed to 3 different types of prescription drug information leaflet (the current practice, a preexisting 1-page text-only leaflet, and the 1-page PILs) for the 3 drugs (Celebrex, Ventolin HFA, Prezista) for a given involvement scenario. A prevalidated survey instrument was used to measure product knowledge, attitude toward the leaflet, and intention to read. Multivariate analysis of variance indicated a significant positive effect of cognitive effort, involvement, and their interaction across all measured variables. Mean scores for product knowledge, attitude toward the leaflet, and intention to read were highest for PILs (P < .001), indicating that PILs exerted the lowest cognitive effort. Univariate and post hoc analyses indicated that product knowledge significantly increases with high involvement. Patients reading PILs have higher comprehension compared with the current-practice and text-only prototype leaflets evaluated. Higher levels of involvement further improve participant knowledge about the drug, increase their intention to read the leaflet, and change their attitude toward the leaflet. Implementation of PILs would improve information processing for consumers by reducing their cognitive effort.
Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains
NASA Astrophysics Data System (ADS)
Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao
2017-11-01
A new concept of GT encryption scheme is proposed in this paper. We present a novel optical image encryption method by using quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is firstly transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are GTs' rotation angles and multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. In the future, the method of applying QR codes and fingerprints in GT domains possesses much potential for information security.
Toyota, Toshiaki; Morimoto, Takeshi; Shiomi, Hiroki; Ando, Kenji; Ono, Koh; Shizuta, Satoshi; Kato, Takao; Saito, Naritatsu; Furukawa, Yutaka; Nakagawa, Yoshihisa; Horie, Minoru; Kimura, Takeshi
2017-03-24
Few studies have evaluated the prevalence and clinical outcomes of ad hoc percutaneous coronary intervention (PCI), performing diagnostic coronary angiography and PCI in the same session, in stable coronary artery disease (CAD) patients. Methods and Results: From the CREDO-Kyoto PCI/CABG registry cohort-2, 6,943 patients were analyzed as having stable CAD and undergoing first PCI. Ad hoc PCI and non-ad hoc PCI were performed in 1,722 (24.8%) and 5,221 (75.1%) patients, respectively. The cumulative 5-year incidence and adjusted risk for all-cause death were not significantly different between the 2 groups (15% vs. 15%, P=0.53; hazard ratio: 1.15, 95% confidence interval: 0.98-1.35, P=0.08). Ad hoc PCI relative to non-ad hoc PCI was associated with neutral risk for myocardial infarction, any coronary revascularization, and bleeding, but was associated with a trend towards lower risk for stroke (hazard ratio: 0.78, 95% confidence interval: 0.60-1.02, P=0.06). Ad hoc PCI in stable CAD patients was associated with at least comparable 5-year clinical outcomes to those of non-ad hoc PCI. Considering patients' preference and the cost saving, the ad hoc PCI strategy might be a safe and attractive option for patients with stable CAD, although the prevalence of ad hoc PCI was low in the current study population.
Retell as an Indicator of Reading Comprehension
Reed, Deborah K.; Vaughn, Sharon
2011-01-01
The purpose of this narrative synthesis is to determine the reliability and validity of retell protocols for assessing reading comprehension of students in grades K–12. Fifty-four studies were systematically coded for data related to the administration protocol, scoring procedures, and technical adequacy of the retell component. Retell was moderately correlated with standardized measures of reading comprehension and, with older students, had a lower correlation with decoding and fluency. Literal information was retold more frequently than inferential, and students with learning disabilities or reading difficulties needed more supports to demonstrate adequate recall. Great variability was shown in the prompting procedures, but scoring methods were more consistent across studies. The influences of genre, background knowledge, and organizational features were often specific to particular content, texts, or students. Overall, retell has not yet demonstrated adequacy as a progress monitoring instrument. PMID:23125521
Flexible Vinyl and Urethane Coating and Printing: New Source Performance Standards (NSPS)
Learn about the New Source Performance Standards (NSPS) for flexible vinyl and urethane coating and printing by reading the rule summary, the rule history, the Code of Federal Regulations subpart, and related rules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol L.
2016-07-05
Compare Gene Calls (CGC) is a Python code used for combining and comparing gene calls from any number of gene callers. A gene caller is a computer program that predicts the extents of open reading frames within the genomes of biological organisms.
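A minimal sketch of the kind of comparison CGC performs, assuming each gene call is represented as a (start, end, strand) tuple on one contig; this is illustrative only and is not the actual CGC code:

import itertools

def compare_gene_calls(calls_a, calls_b):
    """Split calls into exact matches and calls unique to each caller."""
    set_a, set_b = set(calls_a), set(calls_b)
    return {
        "identical": sorted(set_a & set_b),   # same start, end, and strand
        "unique_to_a": sorted(set_a - set_b),
        "unique_to_b": sorted(set_b - set_a),
    }

if __name__ == "__main__":
    caller_a = [(100, 700, "+"), (900, 1500, "-")]   # hypothetical gene caller A
    caller_b = [(100, 700, "+"), (950, 1500, "-")]   # hypothetical gene caller B
    for group, calls in compare_gene_calls(caller_a, caller_b).items():
        print(group, calls)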
Automatic multi-banking of memory for microprocessors
NASA Technical Reports Server (NTRS)
Wiker, G. A. (Inventor)
1984-01-01
A microprocessor system is provided with added memories that expand its address space beyond the capacity of its address word length. This is achieved by using indirect addressing instructions of a type having a detectable operations code and by dedicating designated address spaces of memory to each of the added memories, one space per memory. Each operations code of an instruction read from main memory is decoded to identify indirect addressing instructions of the specified type, and the address that follows is then decoded to determine which added memory is associated with it. The associated added memory is selectively enabled while the main memory is disabled, so that the instruction is executed on the location to which the effective address of the indirect addressing instruction points, either before or after the indirect address is read from main memory, depending on how the system is arranged by a switch.
Denburg, Michelle R.; Haynes, Kevin; Shults, Justine; Lewis, James D.; Leonard, Mary B.
2011-01-01
Purpose Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the U.K. represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. Methods A cross-sectional validation study was performed to identify codes that best define CKD stages 3–5. All subjects with at least one non-zero measure of serum creatinine after 1-Jan-2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects <18 years and the Modification of Diet in Renal Disease formula for subjects ≥18 years of age. CKD was defined as an eGFR <60 ml/min/1.73m2 on at least two occasions, more than 90 days apart. Results The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95% CI: 4.54, 4.58). A list of 45 Read codes was compiled which yielded a positive predictive value of 88.9% (95% CI: 88.7, 89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD stage 3. Conclusions The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. PMID:22020900
Denburg, Michelle R; Haynes, Kevin; Shults, Justine; Lewis, James D; Leonard, Mary B
2011-11-01
Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the UK represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. A cross-sectional validation study was performed to identify codes that best define CKD Stages 3-5. All subjects with at least one non-zero measure of serum creatinine after 1 January 2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects aged < 18 years and the Modification of Diet in Renal Disease formula for subjects aged ≥ 18 years. CKD was defined as an eGFR <60 mL/minute/1.73 m² on at least two occasions, more than 90 days apart. The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95%CI, 4.54-4.58). A list of 45 Read codes was compiled, which yielded a positive predictive value of 88.9% (95%CI, 88.7-89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD Stage 3. The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. Copyright © 2011 John Wiley & Sons, Ltd.
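A sketch of the laboratory CKD definition described above: estimate GFR from serum creatinine and flag subjects with eGFR below 60 mL/min/1.73 m2 on at least two occasions more than 90 days apart. The 4-variable MDRD and bedside Schwartz coefficients below are the commonly published ones and are assumptions for illustration, not values quoted from the paper:

from datetime import date

def egfr_mdrd(scr_mg_dl, age_years, female, black):
    # 4-variable MDRD study equation (IDMS-traceable coefficients assumed).
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_schwartz(scr_mg_dl, height_cm, k=0.413):
    # Bedside Schwartz formula for subjects under 18 years (k is an assumption).
    return k * height_cm / scr_mg_dl

def meets_ckd_definition(egfr_by_date, threshold=60.0, min_gap_days=90):
    # Two eGFR values below the threshold more than min_gap_days apart.
    lows = sorted(d for d, egfr in egfr_by_date if egfr < threshold)
    return len(lows) >= 2 and (lows[-1] - lows[0]).days > min_gap_days

print(meets_ckd_definition([(date(2005, 1, 10), 52.0), (date(2005, 6, 1), 55.0)]))  # True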
Stroboscopic Vision as a Treatment for Retinal Slip Induced Motion Sickness
NASA Technical Reports Server (NTRS)
Reschke, M. F.; Somers, J. T.; Ford, G.; Krnavek, J. M.; Hwang, E. J.; Leigh, R. J.; Estrada, A.
2007-01-01
Motion sickness in the general population is a significant problem driven by the increasingly more sophisticated modes of transportation, visual displays, and virtual reality environments. It is important to investigate non-pharmacological alternatives for the prevention of motion sickness for individuals who cannot tolerate the available anti-motion sickness drugs, or who are precluded from medication because of different operational environments. Based on the initial work of Melvill Jones, in which post hoc results indicated that motion sickness symptoms were prevented during visual reversal testing when stroboscopic vision was used to prevent retinal slip, we have evaluated stroboscopic vision as a method of preventing motion sickness in a number of different environments. Specifically, we have undertaken a five part study that was designed to investigate the effect of stroboscopic vision (either with a strobe light or LCD shutter glasses) on motion sickness while: (1) using visual field reversal, (2) reading while riding in a car (with or without external vision present), (3) making large pitch head movements during parabolic flight, (4) during exposure to rough seas in a small boat, and (5) seated and reading in the cabin area of a UH60 Black Hawk Helicopter during 20 min of provocative flight patterns.
Primary proton and helium spectra around the knee observed by the Tibet air-shower experiment
NASA Astrophysics Data System (ADS)
Jing, Huang; Tibet ASγ Collaboration
A hybrid experiment was carried out to study the cosmic-ray primary composition in the 'knee' energy region. The experimental set-up consists of the Tibet-II air shower array (AS), the emulsion chamber (EC) and the burst detector (BD), which are operated simultaneously and provide information on the primary species. The experiment was carried out at Yangbajing (4,300 m a.s.l., 606 g/cm2) in Tibet during the period from 1996 through 1999. We have already reported the primary proton flux around the knee region based on the simulation code COSMOS. In this paper, we present the primary proton and helium spectra around the knee region. We also extensively examine the simulation codes COSMOS ad-hoc and CORSIKA with the interaction models QGSJET01, DPMJET 2.55, SIBYLL 2.1, VENUS 4.125, HDPM, and NEXUS 2. Based on these calculations, we briefly discuss the systematic errors in our experimental results due to the Monte Carlo simulation.
A Nonlinear Model for Fuel Atomization in Spray Combustion
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey (Technical Monitor); Ibrahim, Essam A.; Sree, Dave
2003-01-01
Most gas turbine combustion codes rely on ad-hoc statistical assumptions regarding the outcome of fuel atomization processes. The modeling effort proposed in this project is aimed at developing a realistic model to produce accurate predictions of fuel atomization parameters. The model involves application of the nonlinear stability theory to analyze the instability and subsequent disintegration of the liquid fuel sheet that is produced by fuel injection nozzles in gas turbine combustors. The fuel sheet is atomized into a multiplicity of small drops of large surface area to volume ratio to enhance the evaporation rate and combustion performance. The proposed model will effect predictions of fuel sheet atomization parameters such as drop size, velocity, and orientation as well as sheet penetration depth, breakup time and thickness. These parameters are essential for combustion simulation codes to perform a controlled and optimized design of gas turbine fuel injectors. Optimizing fuel injection processes is crucial to improving combustion efficiency and hence reducing fuel consumption and pollutants emissions.
Models and Frameworks: A Synergistic Association for Developing Component-Based Applications
Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara
2014-01-01
The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858
Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...
2016-11-01
A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
Models and frameworks: a synergistic association for developing component-based applications.
Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara
2014-01-01
The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.
Validation of asthma recording in the Clinical Practice Research Datalink (CPRD)
Morales, Daniel R; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J; Quint, Jennifer K
2017-01-01
Objectives The optimal method of identifying people with asthma from electronic health records in primary care is not known. The aim of this study is to determine the positive predictive value (PPV) of different algorithms using clinical codes and prescription data to identify people with asthma in the United Kingdom Clinical Practice Research Datalink (CPRD). Methods 684 participants registered with a general practitioner (GP) practice contributing to CPRD between 1 December 2013 and 30 November 2015 were selected according to one of eight predefined potential asthma identification algorithms. A questionnaire was sent to the GPs to confirm asthma status and provide additional information to support an asthma diagnosis. Two study physicians independently reviewed and adjudicated the questionnaires and additional information to form a gold standard for asthma diagnosis. The PPV was calculated for each algorithm. Results 684 questionnaires were sent, of which 494 (72%) were returned and 475 (69%) were complete and analysed. All five algorithms including a specific Read code indicating asthma or a non-specific Read code accompanied by additional conditions performed well. The PPV for asthma diagnosis using only a specific asthma code was 86.4% (95% CI 77.4% to 95.4%). Extra information on asthma medication prescription (PPV 83.3%), evidence of reversibility testing (PPV 86.0%) or a combination of all three selection criteria (PPV 86.4%) did not result in a higher PPV. The algorithm using non-specific asthma codes, information on reversibility testing and respiratory medication use scored highest (PPV 90.7%, 95% CI 82.8% to 98.7%), but had a much lower identifiable population. Algorithms based on asthma symptom codes had low PPVs (43.1% to 57.8%). Conclusions People with asthma can be accurately identified from UK primary care records using specific Read codes. The inclusion of spirometry or asthma medications in the algorithm did not clearly improve accuracy. Ethics and dissemination The protocol for this research was approved by the Independent Scientific Advisory Committee (ISAC) for MHRA Database Research (protocol number 15_257) and the approved protocol was made available to the journal and reviewers during peer review. Generic ethical approval for observational research using the CPRD with approval from ISAC has been granted by a Health Research Authority Research Ethics Committee (East Midlands-Derby, REC reference number 05/MRE04/87). The results will be submitted for publication and will be disseminated through research conferences and peer-reviewed journals. PMID:28801439
Shankar, P Ravi; Dubey, Arun K; Mishra, P; Upadhyay, Dinesh K
2008-01-01
The Manipal College of Medical Sciences, Pokhara, Nepal, admits students from Nepal, India, Sri Lanka, and other countries to the undergraduate medical course. The present study sought to describe and explore reading habits of medical students during the first three semesters and obtain their views regarding inclusion of medical humanities in the course. The authors introduced a voluntary module in medical humanities to the fifth- and sixth-semester students. Gender, semester, and nationality of respondents were noted. Commonly read noncourse books (fiction and nonfiction) were noted. Student attitudes toward medical humanities were studied using a set of nine statements. A total of 165 of the 220 students (75%) participated. Indians followed by Nepalese were the most common nationalities. Romantic fiction and biography were most commonly read. The Alchemist and The Da Vinci Code were commonly read books. Students were in favor of inclusion of medical humanities in the curriculum. The median total score was 30 (maximum possible score = 45). Students read widely beyond their course. The possibility of introducing medical humanities in the curriculum should be explored.
Violent comic books and judgments of relational aggression.
Kirsh, Steven J; Olczak, Paul V
2002-06-01
This study investigated the effects of reading extremely violent versus mildly violent comic books on the interpretation of relational provocation situations. One hundred and seventeen introductory psychology students read either an extremely violent comic book or a mildly violent comic book. After reading the comic books, participants read five hypothetical stories in which a child caused a relationally aggressive event to occur to another child, but the intent of the provocateur was ambiguous. After each story, participants were asked a series of questions about the provocateur's intent, potential retaliation toward the provocateur, and the provocateur's emotional state. Responses were coded in terms of amount of negative and violent content. Results indicated that participants reading the extremely violent comic books ascribed more hostile intent to the provocateur, suggested more retaliation toward the provocateur, and attributed a more negative emotional state to the provocateur than participants reading the mildly violent comic book. These data suggest that social information processing of relationally aggressive situations is influenced by violent comic books, even if the comic books do not contain themes of relational aggression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.
1979-07-01
User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.
Origins of Genes: "Big Bang" or Continuous Creation?
NASA Astrophysics Data System (ADS)
Kesse, Paul K.; Gibbs, Adrian
1992-10-01
Many protein families are common to all cellular organisms, indicating that many genes have ancient origins. Genetic variation is mostly attributed to processes such as mutation, duplication, and rearrangement of ancient modules. Thus it is widely assumed that much of present-day genetic diversity can be traced by common ancestry to a molecular "big bang." A rarely considered alternative is that proteins may arise continuously de novo. One mechanism of generating different coding sequences is by "overprinting," in which an existing nucleotide sequence is translated de novo in a different reading frame or from noncoding open reading frames. The clearest evidence for overprinting is provided when the original gene function is retained, as in overlapping genes. Analysis of their phylogenies indicates which are the original genes and which are their informationally novel partners. We report here the phylogenetic relationships of overlapping coding sequences from steroid-related receptor genes and from tymovirus, luteovirus, and lentivirus genomes. For each pair of overlapping coding sequences, one is confined to a single lineage, whereas the other is more widespread. This suggests that the phylogenetically restricted coding sequence arose only in the progenitor of that lineage by translating an out-of-frame sequence to yield the new polypeptide. The production of novel exons by alternative splicing in thyroid receptor and lentivirus genes suggests that introns can be a valuable evolutionary source for overprinting. New genes and their products may drive major evolutionary changes.
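A minimal illustration of the overprinting idea described above: translating the same nucleotide stretch in two reading frames shows how one sequence can encode two different polypeptides. The sequence below is made up, and Biopython is assumed to be available; this is not taken from the paper's analysis:

# Translate a made-up nucleotide stretch in two reading frames to illustrate
# "overprinting": one stretch of DNA yielding two distinct polypeptides.
from Bio.Seq import Seq

dna = Seq("ATGGCTAGCTTGGCATGCCGTAGCTAGGCTAACTGA")

frame0 = dna.translate()          # original reading frame
frame1 = dna[1:34].translate()    # +1 frame, trimmed to whole codons

print("frame 0: ", frame0)
print("frame +1:", frame1)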
VizieR Online Data Catalog: FAMA code for stellar parameters and abundances (Magrini+, 2013)
NASA Astrophysics Data System (ADS)
Magrini, L.; Randich, S.; Friel, E.; Spina, L.; Jacobson, H.; Cantat-Gaudin, T.; Donati, P.; Baglioni, R.; Maiorca, E.; Bragaglia, A.; Sordo, R.; Vallenari, A.
2013-07-01
FAMA v.1, July 2013, distributed with MOOGv2013 and Kurucz models.
Perl codes: read_out2.pl, read_final.pl, driver.pl, sclipping_26.0.pl, sclipping_final.pl, sclipping_26.1.pl, confronta.pl, fama.pl.
MODEL_ATMO: model atmospheres and interpolator (Kurucz models).
MOOG_files: files to compile MOOG (the most recent version of MOOG can be obtained from http://www.as.utexas.edu/~chris/moog.html).
FAMAmoogfiles: files to update when compiling MOOG.
OUTPUT: directory in which the results will be stored; contains an sm macro to produce final plots.
automoog.par: file with parameters for FAMA: 1) OUTPUTdir; 2) MOOGdir; 3) modelsdir; 4) 1.0 (default), percentage of the dispersion of FeI abundances to be considered to compute the errors on the stellar parameters (1.0 means 100%, so that to compute, e.g., the error on Teff the code is allowed to find the Teff corresponding to a slope given by σ(FeI)/range(EP)); 5) 1.2 (default), σ clipping for FeI lines; 6) 1.0 (default), σ clipping for FeII lines; 7) 1.0 (default), σ clipping for the other elements; 8) 1.0 (default), value of the QP parameter (higher values mean weaker convergence criteria).
star.iron: EWs in the correct format to test the code.
sun.par: initial parameters for the test (1 data file).
Position coding effects in a 2D scenario: the case of musical notation.
Perea, Manuel; García-Chamorro, Cristina; Centelles, Arnau; Jiménez, María
2013-07-01
How does the cognitive system encode the location of objects in a visual scene? In the past decade, this question has attracted much attention in the field of visual-word recognition (e.g., "jugde" is perceptually very close to "judge"). Letter transposition effects have been explained in terms of perceptual uncertainty or shared "open bigrams". In the present study, we focus on note position coding in music reading (i.e., a 2D scenario). The usual way to display music is the staff (i.e., a set of 5 horizontal lines and their resultant 4 spaces). When reading musical notation, it is critical to identify not only each note (temporal duration), but also its pitch (y-axis) and its temporal sequence (x-axis). To examine note position coding, we employed a same-different task in which two briefly and consecutively presented staves contained four notes. The experiment was conducted with experts (musicians) and non-experts (non-musicians). For the "different" trials, the critical conditions involved staves in which two internal notes were switched vertically, horizontally, or fully transposed, as well as the appropriate control conditions. Results revealed that note position coding was only approximate at the early stages of processing and that this encoding process was modulated by expertise. We examine the implications of these findings for models of object position encoding. Copyright © 2013 Elsevier B.V. All rights reserved.
Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules.
Han, M; Gao, X; Su, J Z; Nie, S
2001-07-01
Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots (zinc sulfide-capped cadmium selenide nanocrystals) into polymeric microbeads at precisely controlled ratios. Their novel optical properties (e.g., size-tunable emission and simultaneous excitation) render these highly luminescent quantum dots (QDs) ideal fluorophores for wavelength-and-intensity multiplexing. The use of 10 intensity levels and 6 colors could theoretically code one million nucleic acid or protein sequences. Imaging and spectroscopic measurements indicate that the QD-tagged beads are highly uniform and reproducible, yielding bead identification accuracies as high as 99.99% under favorable conditions. DNA hybridization studies demonstrate that the coding and target signals can be simultaneously read at the single-bead level. This spectral coding technology is expected to open new opportunities in gene expression studies, high-throughput screening, and medical diagnostics.
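As a quick check on the coding-capacity claim above: with m = 6 colors and n = 10 distinguishable intensity levels per color, the number of wavelength-and-intensity codes grows as a simple power (in practice the all-off code and limits on spectral overlap or intensity resolution would reduce this theoretical count):

\[
  N = n^{m} = 10^{6} = 1{,}000{,}000 .
\]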
An evaluation of Wikipedia as a resource for patient education in nephrology.
Thomas, Garry R; Eng, Lawson; de Wolff, Jacob F; Grover, Samir C
2013-01-01
Wikipedia, a multilingual online encyclopedia, is a common starting point for patient medical searches. As its articles can be authored and edited by anyone worldwide, the credibility of the medical content of Wikipedia has been openly questioned. Wikipedia medical articles have also been criticized as too advanced for the general public. This study assesses the comprehensiveness, reliability, and readability of nephrology articles on Wikipedia. The International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) diagnostic codes for nephrology (N00-N29.8) were used as a topic list to investigate the English Wikipedia database. Comprehensiveness was assessed by the proportion of ICD-10 codes that had corresponding articles. Reliability was measured by both the number of references per article and proportion of references from substantiated sources. Finally, readability was assessed using three validated indices (Flesch-Kincaid grade level, Automated readability index, and Flesch reading ease). Nephrology articles on Wikipedia were relatively comprehensive, with 70.5% of ICD-10 codes being represented. The articles were fairly reliable, with 7.1 ± 9.8 (mean ± SD) references per article, of which 59.7 ± 35.0% were substantiated references. Finally, all three readability indices determined that nephrology articles are written at a college level. Wikipedia is a comprehensive and fairly reliable medical resource for nephrology patients that is written at a college reading level. Accessibility of this information for the general public may be improved by hosting it at alternative Wikipedias targeted at a lower reading level, such as the Simple English Wikipedia. © 2013 Wiley Periodicals, Inc.
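A sketch of how readability scoring of this kind can be reproduced, using the standard published Flesch formulas and a crude vowel-group syllable counter; the counter, and therefore the exact scores, are rough approximations, and the study's own tooling is not described in the abstract:

# Approximate Flesch reading ease and Flesch-Kincaid grade level for a text,
# using a crude vowel-group syllable counter (an assumption for illustration).
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / max(1, len(words))  # syllables per word
    flesch_ease = 206.835 - 1.015 * wps - 84.6 * spw
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59
    return flesch_ease, fk_grade

print(readability("The glomerulus filters blood. Nephrons regulate electrolyte balance."))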
Ouellette, Gene; Sénéchal, Monique
2017-01-01
In this study we evaluated whether the sophistication of children's invented spellings in kindergarten was predictive of subsequent reading and spelling in Grade 1, while also considering the influence of well-known precursors. Children in their first year of schooling (mean age = 66 months; N = 171) were assessed on measures of oral vocabulary, alphabetic knowledge, phonological awareness, word reading and invented spelling; approximately 1 year later they were assessed on multiple measures of reading and spelling. Path modeling was pursued to evaluate a hypothesized unique, causal role of invented spelling in subsequent literacy outcomes. Results supported a model in which invented spelling contributed directly to concurrent reading along with alphabetic knowledge and phonological awareness. Longitudinally, invented spelling influenced subsequent reading, along with alphabetic knowledge while mediating the connection between phonological awareness and early reading. Invented spelling also influenced subsequent conventional spelling along with phonological awareness, while mediating the influence of alphabetic knowledge. Invented spelling thus adds explanatory variance to literacy outcomes not entirely captured by well-studied code and language-related skills. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Rhoden, D L; Hancock, G A; Miller, J M
1993-03-01
A numerical-code system for the reference identification of Staphylococcus species, Stomatococcus mucilaginosus, and Micrococcus species was established by using a selected panel of conventional biochemicals. Results from 824 cultures (289 eye isolate cultures, 147 reference strains, and 388 known control strains) were used to generate a list of 354 identification code numbers. Each six-digit code number was based on results from 18 conventional biochemical reactions. Seven milliliters of purple agar base with 1% sterile carbohydrate solution added was poured into 60-mm-diameter agar plates. All biochemical tests were inoculated with 1 drop of a heavy broth suspension, incubated at 35 degrees C, and read daily for 3 days. All reactions were read and interpreted by the method of Kloos et al. (G. A. Hebert, C. G. Crowder, G. A. Hancock, W. R. Jarvis, and C. Thornsberry, J. Clin. Microbiol. 26:1939-1949, 1988; W. E. Kloos and D. W. Lambe, Jr., P. 222-237, in A. Balows, W. J. Hansler, Jr., K. L. Herrmann, H. D. Isenberg, and H. J. Shadomy, ed., Manual of Clinical Microbiology, 5th ed., 1991). This modified reference identification method was 96 to 98% accurate and could have value in reference and public health laboratory settings.
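The abstract does not spell out how the 18 biochemical reactions map onto six digits, but profile-index systems of this kind typically group the tests into triplets and weight positive results 1, 2, and 4 within each triplet; a sketch under that assumption (the actual weighting used for this code set is not given):

# Hypothetical encoding of 18 biochemical test results (True = positive) into
# a six-digit profile code, assuming triplets weighted 1, 2, 4 as in common
# analytical profile index schemes.
def profile_code(results):
    assert len(results) == 18, "expected 18 biochemical reactions"
    digits = []
    for i in range(0, 18, 3):
        triplet = results[i:i + 3]
        digits.append(sum(w for w, positive in zip((1, 2, 4), triplet) if positive))
    return "".join(str(d) for d in digits)

example = [True, False, True, False, False, False, True, True, False,
           True, True, True, False, True, False, False, False, True]
print(profile_code(example))  # -> "503724"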
Pressure Sensitive Tape and Label Surface Coating Industry: New Source Performance Standards (NSPS)
Learn about the New Source Performance Standards (NSPS) for pressure sensitive tape and label surface coating. Read the rule summary and history, and find the code of federal regulations and federal register citations.
Reading Acquisition and Beyond: Decoding Includes Cognition.
ERIC Educational Resources Information Center
Perfetti, Charles A.
1984-01-01
Focuses on (1) the acquisition and use of word representations and (2) the acquisition of the alphabetic code. Urges that instruction provide conditions to promote the learning of three types of representation--word forms, letter patterns, and mapping. (RDN)
Learn about the NESHAP regulation for brick and structural clay products by reading the rule summary, rule history, code of federal regulations, and the additional resources like fact sheets and background information documents
Polymeric Coating of Supporting Substrates Facilities: New Source Performance Standards (NSPS)
Learn more about the New Source Performance Standards (NSPS) rule for polymeric coating by reading the rule summary, rule history and the code of federal regulations subpart. Information on related rules is also on this page.
Learn about the NSPS for municipal solid waste landfills by reading the rule summary, rule history, code of federal regulations text, fact sheets, background information documents, related rules and compliance information.
Atmospheric Science Data Center
2014-04-25
AirMISR WISCONSIN 2000. Project title: AirMISR. Platform: ER-2. Spatial coverage: Wisconsin (35.92, 43.79) (-97.94, -90.23). Readme file: Readme Wisconsin. Read software files: IDL code.
Human Hepatocyte Growth Factor (hHGF)-Modified Hepatic Oval Cells Improve Liver Transplant Survival
Li, Li; Ran, Jiang-Hua; Li, Xue-Hua; Liu, Zhi-Heng; Liu, Gui-Jie; Gao, Yan-Chao; Zhang, Xue-Li; Sun, Hiu-Dong
2012-01-01
Despite progress in the field of immunosuppression, acute rejection is still a common postoperative complication following liver transplantation. This study aims to investigate the capacity of the human hepatocyte growth factor (hHGF) in modifying hepatic oval cells (HOCs) administered simultaneously with orthotopic liver transplantation as a means of improving graft survival. HOCs were activated and isolated using a modified 2-acetylaminofluorene/partial hepatectomy (2-AAF/PH) model in male Lewis rats. A HOC line stably expressing the HGF gene was established following stable transfection of the pBLAST2-hHGF plasmid. Our results demonstrated that hHGF-modified HOCs could efficiently differentiate into hepatocytes and bile duct epithelial cells in vitro. Administration of HOCs at the time of liver transplantation induced a wider distribution of SRY-positive donor cells in liver tissues. Administration of hHGF-HOC at the time of transplantation remarkably prolonged the median survival time and improved liver function for recipients compared to these parameters in the other treatment groups (P<0.05). Moreover, hHGF-HOC administration at the time of liver transplantation significantly suppressed elevation of interleukin-2 (IL-2), tumor necrosis factor-α (TNF-α) and interferon-γ (IFN-γ) levels while increasing the production of IL-10 and TGF-β1 (P<0.05). HOC or hHGF-HOC administration promoted cell proliferation, reduced cell apoptosis, and decreased liver allograft rejection rates. Furthermore, hHGF-modified HOCs more efficiently reduced acute allograft rejection (P<0.05 versus HOC transplantation only). Our results indicate that the combination of hHGF-modified HOCs with liver transplantation decreased host anti-graft immune responses resulting in a reduction of allograft rejection rates and prolonging graft survival in recipient rats. This suggests that HOC-based cell transplantation therapies can be developed as a means of treating severe liver injuries. PMID:23028627
Early literacy experiences constrain L1 and L2 reading procedures
Bhide, Adeetee
2015-01-01
Computational models of reading posit that there are two pathways to word recognition, using sublexical phonology or morphological/orthographic information. They further theorize that everyone uses both pathways to some extent, but the division of labor between the pathways can vary. This review argues that the first language one was taught to read, and the instructional method by which one was taught, can have profound and long-lasting effects on how one reads, not only in one’s first language, but also in one’s second language. Readers who first learn a transparent orthography rely more heavily on the sublexical phonology pathway, and this seems relatively impervious to instruction. Readers who first learn a more opaque orthography rely more on morphological/orthographic information, but the degree to which they do so can be modulated by instructional method. Finally, readers who first learned to read a highly opaque morphosyllabic orthography use less sublexical phonology while reading in their second language than do other second language learners and this effect may be heightened if they were not also exposed to an orthography that codes for phonological units during early literacy acquisition. These effects of early literacy experiences on reading procedure are persistent despite increases in reading ability. PMID:26483714
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, in 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing synchrotron tracking codes, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
Noncoding sequence classification based on wavelet transform analysis: part I
NASA Astrophysics Data System (ADS)
Paredes, O.; Strojnik, M.; Romo-Vázquez, R.; Vélez Pérez, H.; Ranta, R.; Garcia-Torales, G.; Scholl, M. K.; Morales, J. A.
2017-09-01
DNA sequences in the human genome can be divided into coding and noncoding ones. Coding sequences are those that are read during transcription. The identification of coding sequences has been widely reported in the literature due to their much-studied periodicity. Noncoding sequences represent the majority of the human genome. They play an important role in gene regulation and differentiation among cells. However, noncoding sequences do not exhibit periodicities that correlate to their functions. The ENCODE (Encyclopedia of DNA Elements) and Epigenomics Roadmap projects have cataloged human noncoding sequences into specific functions. We study characteristics of noncoding sequences with wavelet analysis of genomic signals.
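A minimal sketch of the kind of analysis described, converting a made-up DNA fragment to a binary indicator (Voss-style) signal and decomposing it with PyWavelets; the paper's actual signal mapping and wavelet choice are not specified in the abstract, so both are assumptions:

# Map a made-up DNA fragment to a numeric indicator signal and run a discrete
# wavelet decomposition on it; the db4 wavelet and 4 levels are assumptions.
import numpy as np
import pywt

sequence = "ATGCGTACGTTAGCCGATCGATCGTTACGGCA" * 8

# One binary indicator track per nucleotide class; here we analyse the G+C track.
gc_signal = np.array([1.0 if base in "GC" else 0.0 for base in sequence])

coeffs = pywt.wavedec(gc_signal, "db4", level=4)
for band, c in enumerate(coeffs):
    print(f"band {band}: {len(c)} coefficients, energy {np.sum(c**2):.2f}")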
Towards a complete map of the human long non-coding RNA transcriptome.
Uszczynska-Ratajczak, Barbara; Lagarde, Julien; Frankish, Adam; Guigó, Roderic; Johnson, Rory
2018-05-23
Gene maps, or annotations, enable us to navigate the functional landscape of our genome. They are a resource upon which virtually all studies depend, from single-gene to genome-wide scales and from basic molecular biology to medical genetics. Yet present-day annotations suffer from trade-offs between quality and size, with serious but often unappreciated consequences for downstream studies. This is particularly true for long non-coding RNAs (lncRNAs), which are poorly characterized compared to protein-coding genes. Long-read sequencing technologies promise to improve current annotations, paving the way towards a complete annotation of lncRNAs expressed throughout a human lifetime.
Schaeffer, E; Sninsky, J J
1984-01-01
Proteins that are related evolutionarily may have diverged at the level of primary amino acid sequence while maintaining similar secondary structures. Computer analysis has been used to compare the open reading frames of the hepatitis B virus to those of the woodchuck hepatitis virus at the level of amino acid sequence, and to predict the relative hydrophilic character and the secondary structure of putative polypeptides. Similarity is seen at the levels of relative hydrophilicity and secondary structure, in the absence of sequence homology. These data reinforce the proposal that these open reading frames encode viral proteins. Computer analysis of this type can be more generally used to establish structural similarities between proteins that do not share obvious sequence homology as well as to assess whether an open reading frame is fortuitous or codes for a protein. PMID:6585835
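A sketch of the sliding-window hydropathy profiling that this kind of comparison relies on, using the Kyte-Doolittle scale as one common choice; the original analysis predates this tooling and its exact scale and window are not stated, so both are assumptions, and the protein sequence below is made up:

# Sliding-window hydropathy profile of a made-up protein sequence using the
# Kyte-Doolittle scale; scale values and window size are common defaults.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_profile(protein, window=9):
    scores = [KD[aa] for aa in protein]
    half = window // 2
    return [sum(scores[i - half:i + half + 1]) / window
            for i in range(half, len(scores) - half)]

profile = hydropathy_profile("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print([round(v, 2) for v in profile[:5]])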
Reed, Terrie L; Kaufman-Rivi, Diana
2010-01-01
The broad array of medical devices and the potential for device failures, malfunctions, and other adverse events associated with each device creates a challenge for public health device surveillance programs. Coding reported events by type of device problem provides one method for identifying a potential signal of a larger device issue. The Food and Drug Administration's (FDA) Center for Devices and Radiological Health (CDRH) Event Problem Codes that are used to report adverse events previously lacked a structured set of controls for code development and maintenance. Over time this led to inconsistent, ambiguous, and duplicative concepts being added to the code set on an ad-hoc basis. Recognizing the limitation of its coding system the FDA set out to update the system to improve its usefulness within FDA and as a basis of a global standard to identify important patient and device outcomes throughout the medical community. In 2004, FDA and the National Cancer Institute (NCI) signed a Memorandum of Understanding (MOU) whereby NCI agreed to provide terminology development and maintenance services to all FDA Centers. Under this MOU, CDRH's Office of Surveillance and Biometrics (OSB) convened a cross-Center workgroup and collaborated with staff at NCI Enterprise Vocabulary Service (EVS) to streamline the Patient and Device Problem Codes and integrate them into the NCI Thesaurus and Meta-Thesaurus. This initiative included many enhancements to the Event Problem Codes aimed at improving code selection as well as improving adverse event report analysis. LIMITATIONS & RECOMMENDATIONS: Staff resources, database concerns, and limited collaboration with external groups in the initial phases of the project are discussed. Adverse events associated with medical device use can be better understood when they are reported using a consistent and well-defined code set. This FDA initiative was an attempt to improve the structure and add control mechanisms to an existing code set, improve analysis tools that will better identify device safety trends, and improve the ability to prevent or mitigate effects of adverse events associated with medical device use.
Researchers led by Ashish Lal, Ph.D., Investigator in the Genetics Branch, have shown that when the DNA in human colon cancer cells is damaged, a long non-coding RNA (lncRNA) regulates the expression of genes that halt growth, which allows the cells to repair the damage and promote survival. Their findings suggest an important pro-survival function of a lncRNA in cancer cells.
A bill to amend title 23, United States Code, to reauthorize the State infrastructure bank program.
Sen. Ayotte, Kelly [R-NH
2013-09-26
Senate - 09/26/2013 Read twice and referred to the Committee on Commerce, Science, and Transportation.
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2011 CFR
2011-04-01
... microchip reader, the reader shall be tested periodically to determine that it is correctly reading the bar code or microchip. (iii) If the electronic equipment returns a voucher or a payment slip to the player...
Enhanced Weight based DSR for Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Verma, Samant; Jain, Sweta
2011-12-01
Routing in ad hoc networks is a challenging problem, since a good routing protocol must ensure fast and efficient packet forwarding, which is not straightforward in ad hoc networks. Many routing protocols exist in the literature, but they do not account for all aspects of ad hoc networks, such as mobility and device and medium constraints, which makes them inefficient for some configurations and categories of ad hoc networks. In this paper we therefore propose an improvement of Weight Based DSR that incorporates several of these aspects, namely stability, remaining battery power, load, and trust factor, yielding a new approach, Enhanced Weight Based DSR.
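A sketch of the kind of composite route weight such a protocol could use to rank candidate routes; the individual metrics and the equal weighting below are illustrative assumptions, not the coefficients proposed in the paper:

# Hypothetical composite weight for ranking candidate DSR routes from link
# stability, remaining battery, load, and trust; weights are illustrative.
def route_weight(stability, battery, load, trust,
                 w_stability=0.25, w_battery=0.25, w_load=0.25, w_trust=0.25):
    """All inputs normalised to [0, 1]; higher is better, so load (congestion)
    enters inverted."""
    return (w_stability * stability + w_battery * battery +
            w_load * (1.0 - load) + w_trust * trust)

candidate_routes = {
    "route_a": dict(stability=0.9, battery=0.4, load=0.7, trust=0.8),
    "route_b": dict(stability=0.6, battery=0.9, load=0.2, trust=0.7),
}
best = max(candidate_routes, key=lambda r: route_weight(**candidate_routes[r]))
print("selected:", best)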
Phase space effects on fast ion distribution function modeling in tokamaks
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.
2016-05-01
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podestà, M., E-mail: mpodesta@pppl.gov; Gorelenkova, M.; Fredrickson, E. D.
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
Phase space effects on fast ion distribution function modeling in tokamaks
White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)
2016-06-01
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
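A toy illustration of the distinction drawn above between an ad-hoc diffusive model and a phase-space-resolved "kick" treatment of fast-ion transport; the probabilities and step sizes are made up for illustration and do not represent the actual TRANSP kick model:

# Toy contrast between (a) ad-hoc spatial diffusion of fast ions and (b) a
# "kick"-style update that correlates energy and radial steps.
import random

def diffusive_step(r, d_anomalous=0.1, dt=1e-3):
    # Random walk in minor radius only; particle energy is untouched.
    return r + random.gauss(0.0, (2.0 * d_anomalous * dt) ** 0.5)

def kick_step(r, energy_kev, dr_per_kick=0.02, de_kev_per_kick=5.0, p_kick=0.3):
    # With some probability the particle resonates with a mode and receives
    # correlated kicks in radius and energy (same sign here, by assumption).
    if random.random() < p_kick:
        sign = random.choice((-1.0, 1.0))
        return r + sign * dr_per_kick, energy_kev + sign * de_kev_per_kick
    return r, energy_kev

r, e = 0.3, 80.0
for _ in range(1000):
    r, e = kick_step(r, e)
print(f"final radius {r:.3f}, final energy {e:.1f} keV")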
Training alignment parameters for arbitrary sequencers with LAST-TRAIN.
Hamada, Michiaki; Ono, Yukiteru; Asai, Kiyoshi; Frith, Martin C
2017-03-15
LAST-TRAIN improves sequence alignment accuracy by inferring substitution and gap scores that fit the frequencies of substitutions, insertions, and deletions in a given dataset. We have applied it to mapping DNA reads from IonTorrent and PacBio RS, and we show that it reduces reference bias for Oxford Nanopore reads. The source code is freely available at http://last.cbrc.jp/. mhamada@waseda.jp or mcfrith@edu.k.u-tokyo.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
2012-03-01
graphical user interface (GUI) called ALPINE© [18]. Then, it will be converted into a MAT-file that can be read into MATLAB®. At this point...breathing [3]. For comparison purposes, Balocchi et al. recorded the respiratory signal simultaneously with the tachogram (or EKG) signal. As previously...primary authors, worked to create his own code for implementing the method proposed by Rilling et al. Through reading the BEMD paper and proceeding to
An Annotated Bibliography of Education for Medical Librarianship, 1940-1968
Shirley, Sherrilynne
1969-01-01
An attempt has been made in this bibliography to represent the various viewpoints concerning education for medical librarianship equally. The topics covered include: general background reading and readings for those interested in establishing courses in medical librarianship. The former includes annotations on the history and international aspects of the subject. The latter consists of annotations of articles on early courses and present courses in medical librarianship. A final area discussed is the Medical Library Association's Code for the Training and Certification of Medical Librarians. PMID:4898629
2012-01-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210
Rodríguez-Rodríguez, José C; Quesada-Arencibia, Alexis; Moreno-Díaz, Roberto; García, Carmelo R
2016-04-13
Expiration date labels are ubiquitous in the food industry. With the passage of time, almost any food becomes unhealthy, even when well preserved. The expiration date is estimated based on the type and manufacture/packaging time of that particular food unit. This date is then printed on the container so that it is available to the end user at the time of consumption. MONICOD (MONItoring of CODes), an industrial validator of expiration codes, allows the expiration code printed on a drink can to be read. This verification occurs immediately after printing. MONICOD faces difficulties due to the high printing rate (35 cans per second) and problematic lighting caused by the metallic surface on which the code is printed. This article describes a solution that allows MONICOD to extract shapes and presents quantitative results for speed and quality.
Radiocarbon-based assessment of fossil fuel-derived contaminant associations in sediments.
White, Helen K; Reddy, Christopher M; Eglinton, Timothy I
2008-08-01
Hydrophobic organic contaminants (HOCs) are associated with natural organic matter (OM) in the environment via mechanisms such as sorption or chemical binding. The latter interactions are difficult to quantitatively constrain, as HOCs can reside in different OM pools outside of conventional analytical windows. Here, we exploited natural abundance variations in radiocarbon (14C) to trace various fossil fuel-derived HOCs (14C-free) within chemically defined fractions of contemporary OM (modern 14C content) in 13 samples including marine and freshwater sediments and one dust and one soil sample. Samples were sequentially treated by solvent extraction followed by saponification. Radiocarbon analysis of the bulk sample and resulting residues was then performed. Fossil fuel-derived HOCs released by these treatments were quantified using an isotope mass balance approach as well as by gas chromatography-mass spectrometry. For the majority of samples (n = 13), 98-100% of the total HOC pool was solvent extractable. Nonextracted HOCs are only significant (29% of total HOC pool) in one sample containing p,p-2,2-bis(chlorophenyl)-1,1,1-trichloroethane and its metabolites. The infrequency of significant incorporation of HOCs into nonextracted OM residues suggests that most HOCs are mobile and bioavailable in the environment and, as such, have a greater potential to exert adverse effects.
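The isotope mass balance mentioned above can be written, in one common two-end-member form, as mixing between fossil carbon (radiocarbon-free) and contemporary carbon; the form and the end-member value below are illustrative and are not quoted from the paper:

\[
  \Delta^{14}\mathrm{C}_{\mathrm{sample}}
  = f_{\mathrm{fossil}}\,\Delta^{14}\mathrm{C}_{\mathrm{fossil}}
  + (1 - f_{\mathrm{fossil}})\,\Delta^{14}\mathrm{C}_{\mathrm{contemporary}},
  \qquad
  f_{\mathrm{fossil}}
  = \frac{\Delta^{14}\mathrm{C}_{\mathrm{contemporary}} - \Delta^{14}\mathrm{C}_{\mathrm{sample}}}
         {\Delta^{14}\mathrm{C}_{\mathrm{contemporary}} - \Delta^{14}\mathrm{C}_{\mathrm{fossil}}},
\]

with \(\Delta^{14}\mathrm{C}_{\mathrm{fossil}} = -1000\) per mil for fossil fuel-derived carbon.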
Representing metabolic pathway information: an object-oriented approach.
Ellis, L B; Speedie, S M; McLeish, R
1998-01-01
The University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD) is a website providing information and dynamic links for microbial metabolic pathways, enzyme reactions, and their substrates and products. The Compound, Organism, Reaction and Enzyme (CORE) object-oriented database management system was developed to contain and serve this information. CORE was developed using Java, an object-oriented programming language, and PSE persistent object classes from Object Design, Inc. CORE dynamically generates descriptive web pages for reactions, compounds and enzymes, and reconstructs ad hoc pathway maps starting from any UM-BBD reaction. CORE code is available from the authors upon request. CORE is accessible through the UM-BBD at: http://www. labmed.umn.edu/umbbd/index.html.
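A highly simplified sketch of the kind of object model CORE describes, with compounds, enzymes, and reactions linked so that a pathway can be walked from any starting reaction; the class names, attributes, and example enzymes are hypothetical and are not the actual CORE schema:

# Hypothetical, simplified object model in the spirit of CORE: compounds and
# reactions, plus an ad hoc pathway reconstruction from a starting reaction.
from dataclasses import dataclass, field

@dataclass
class Compound:
    name: str

@dataclass
class Reaction:
    enzyme: str
    substrates: list
    products: list
    next_reactions: list = field(default_factory=list)

def walk_pathway(start, seen=None):
    """Depth-first reconstruction of a pathway map from one reaction."""
    seen = seen if seen is not None else set()
    if id(start) in seen:
        return
    seen.add(id(start))
    print(f"{start.enzyme}: {'+'.join(c.name for c in start.substrates)} -> "
          f"{'+'.join(c.name for c in start.products)}")
    for nxt in start.next_reactions:
        walk_pathway(nxt, seen)

tol = Compound("toluene"); bza = Compound("benzyl alcohol"); bzh = Compound("benzaldehyde")
r2 = Reaction("benzyl alcohol dehydrogenase", [bza], [bzh])
r1 = Reaction("toluene monooxygenase", [tol], [bza], next_reactions=[r2])
walk_pathway(r1)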
NASA Astrophysics Data System (ADS)
Teplukhina, A. A.; Sauter, O.; Felici, F.; Merle, A.; Kim, D.; the TCV Team; the ASDEX Upgrade Team; the EUROfusion MST1 Team
2017-12-01
The present work demonstrates the capabilities of the transport code RAPTOR as a fast and reliable simulator of plasma profiles for the entire plasma discharge, i.e. from ramp-up to ramp-down. This code focuses, at this stage, on the simulation of electron temperature and poloidal flux profiles using prescribed equilibrium and some kinetic profiles. In this work we extend the RAPTOR transport model to include a time-varying plasma equilibrium geometry and verify the changes via comparison with ASTRA code simulations. In addition, a new ad hoc transport model based on constant gradients and suitable for simulations of L-H and H-L mode transitions has been incorporated into the RAPTOR code and validated with rapid simulations of the time evolution of the safety factor and the electron temperature over entire AUG and TCV discharges. An optimization procedure for the plasma termination phase has also been developed during this work. We define the goal of the optimization as ramping down the plasma current as fast as possible while avoiding any disruptions caused by reaching physical or technical limits. Our numerical study of this problem shows that a fast decrease of plasma elongation during current ramp-down can help in reducing plasma internal inductance. An early transition from H- to L-mode allows us to reduce the drop in poloidal beta, which is also important for plasma MHD stability and control. This work shows how these complex nonlinear interactions can be optimized automatically using relevant cost functions and constraints. Preliminary experimental results for TCV are presented.
Phonological working memory in German children with poor reading and spelling abilities.
Steinbrink, Claudia; Klatte, Maria
2008-11-01
Deficits in verbal short-term memory have been identified as one factor underlying reading and spelling disorders. However, the nature of this deficit is still unclear. It has been proposed that poor readers make less use of phonological coding, especially if the task can be solved through visual strategies. In the framework of Baddeley's phonological loop model, this study examined serial recall performance in German second-grade children with poor vs good reading and spelling abilities. Children were presented with four-item lists of common nouns for immediate serial recall. Word length and phonological similarity as well as presentation modality (visual vs auditory) and type of recall (visual vs verbal) were varied as within-subject factors in a mixed design. Word length and phonological similarity effects did not differ between groups, thus indicating equal use of phonological coding and rehearsal in poor and good readers. However, in all conditions, except the one that combined visual presentation and visual recall, overall performance was significantly lower in poor readers. The results suggest that the poor readers' difficulties do not arise from an avoidance of the phonological loop, but from its inefficient use. An alternative account referring to unstable phonological representations in long-term memory is discussed. Copyright (c) 2007 John Wiley & Sons, Ltd.
Learn about the NESHAP for inorganic arsenic from glass manufacturing plants by reading the rule summary, the rule history, the Code of Federal Regulations text, additional resources, and related rules.
Sen. Brown, Sherrod [D-OH]
2010-03-26
Senate - 03/26/2010: Read twice and referred to the Committee on Banking, Housing, and Urban Affairs. Bill status: Introduced.
Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen
2017-01-01
Reading fluency is a critical skill that affects the quality of daily life and working efficiency. The majority of previous studies have focused on oral rather than silent reading fluency, even though silent reading is the dominant mode used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the two modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of visual rapid processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading), and reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with scores on the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed in both oral and silent reading, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ as early as basic visual coding. The current results might also reveal a potential modulation by the language characteristics of Chinese of the relationship between visual rapid processing and reading fluency. PMID:28119663
NASA Astrophysics Data System (ADS)
Jian, Yu-Cin; Wu, Chao-Jung
2015-02-01
We investigated the strategies readers use when reading a science article accompanied by a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while their eye movements were tracked and then completed a reading comprehension test. Our results showed that the text-diagram referencing strategy was commonly used. However, some readers adopted other reading strategies, such as reading the diagram or text first. All readers who referred to the diagram spent roughly the same amount of time reading and performed equally well. However, some participants who ignored the diagram performed more poorly on questions that tested understanding of basic facts. This result indicates that dual coding theory may explain the phenomenon. Eye movement patterns indicated that at least some readers had extracted semantic information from the scientific terms when first looking at the diagram. Readers who read the scientific terms on the diagram first tended to spend less time looking at the same terms in the text, which they read afterward. Moreover, clearly presented diagrams can help readers process both semantic and spatial information, thereby facilitating an overall understanding of the article. In addition, although text-first and diagram-first readers spent similar total reading time on the text and the diagram parts of the article, respectively, text-first readers made significantly fewer saccades between the text and the diagram than diagram-first readers did. This result might be explained by text-directed reading.